Sep 9 00:18:34.019826 kernel: Linux version 6.6.104-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Sep 8 22:41:17 -00 2025
Sep 9 00:18:34.019858 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:18:34.019875 kernel: BIOS-provided physical RAM map:
Sep 9 00:18:34.019884 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Sep 9 00:18:34.019892 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Sep 9 00:18:34.019901 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Sep 9 00:18:34.019911 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Sep 9 00:18:34.019919 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Sep 9 00:18:34.019927 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable
Sep 9 00:18:34.019936 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS
Sep 9 00:18:34.019948 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable
Sep 9 00:18:34.019956 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved
Sep 9 00:18:34.019971 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20
Sep 9 00:18:34.019980 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved
Sep 9 00:18:34.019994 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data
Sep 9 00:18:34.020004 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Sep 9 00:18:34.020019 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable
Sep 9 00:18:34.020029 kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved
Sep 9 00:18:34.020038 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Sep 9 00:18:34.020047 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 9 00:18:34.020057 kernel: NX (Execute Disable) protection: active
Sep 9 00:18:34.020066 kernel: APIC: Static calls initialized
Sep 9 00:18:34.020075 kernel: efi: EFI v2.7 by EDK II
Sep 9 00:18:34.020096 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b675198
Sep 9 00:18:34.020106 kernel: SMBIOS 2.8 present.
Sep 9 00:18:34.020115 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015
Sep 9 00:18:34.020124 kernel: Hypervisor detected: KVM
Sep 9 00:18:34.020138 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 9 00:18:34.020148 kernel: kvm-clock: using sched offset of 5743733141 cycles
Sep 9 00:18:34.020159 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 9 00:18:34.020169 kernel: tsc: Detected 2794.748 MHz processor
Sep 9 00:18:34.020179 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 9 00:18:34.020189 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 9 00:18:34.020199 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000
Sep 9 00:18:34.020209 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Sep 9 00:18:34.020219 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 9 00:18:34.020232 kernel: Using GB pages for direct mapping
Sep 9 00:18:34.020242 kernel: Secure boot disabled
Sep 9 00:18:34.020252 kernel: ACPI: Early table checksum verification disabled
Sep 9 00:18:34.020262 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Sep 9 00:18:34.020278 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 00:18:34.020288 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020298 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020312 kernel: ACPI: FACS 0x000000009CBDD000 000040
Sep 9 00:18:34.020322 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020337 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020347 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020357 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 00:18:34.020367 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Sep 9 00:18:34.020378 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Sep 9 00:18:34.020392 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9]
Sep 9 00:18:34.020403 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Sep 9 00:18:34.020413 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Sep 9 00:18:34.020424 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Sep 9 00:18:34.020434 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Sep 9 00:18:34.020445 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Sep 9 00:18:34.020455 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Sep 9 00:18:34.020466 kernel: No NUMA configuration found
Sep 9 00:18:34.020479 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff]
Sep 9 00:18:34.020494 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff]
Sep 9 00:18:34.020505 kernel: Zone ranges:
Sep 9 00:18:34.020516 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 9 00:18:34.020527 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff]
Sep 9 00:18:34.020538 kernel: Normal empty
Sep 9 00:18:34.020548 kernel: Movable zone start for each node
Sep 9 00:18:34.020559 kernel: Early memory node ranges
Sep 9 00:18:34.020569 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Sep 9 00:18:34.020580 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Sep 9 00:18:34.020590 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Sep 9 00:18:34.020604 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff]
Sep 9 00:18:34.020615 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff]
Sep 9 00:18:34.020625 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff]
Sep 9 00:18:34.020639 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff]
Sep 9 00:18:34.020650 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:18:34.020660 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Sep 9 00:18:34.020670 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Sep 9 00:18:34.020697 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 9 00:18:34.020708 kernel: On node 0, zone DMA: 240 pages in unavailable ranges
Sep 9 00:18:34.020724 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges
Sep 9 00:18:34.020734 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges
Sep 9 00:18:34.020745 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 9 00:18:34.020756 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 9 00:18:34.020767 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 9 00:18:34.020777 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 9 00:18:34.020788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 9 00:18:34.020799 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 9 00:18:34.020809 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 9 00:18:34.020823 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 9 00:18:34.020834 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 9 00:18:34.020845 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 9 00:18:34.020855 kernel: TSC deadline timer available
Sep 9 00:18:34.020865 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Sep 9 00:18:34.020875 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 9 00:18:34.020886 kernel: kvm-guest: KVM setup pv remote TLB flush
Sep 9 00:18:34.020896 kernel: kvm-guest: setup PV sched yield
Sep 9 00:18:34.020908 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Sep 9 00:18:34.020922 kernel: Booting paravirtualized kernel on KVM
Sep 9 00:18:34.020933 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 9 00:18:34.020944 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Sep 9 00:18:34.020954 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u524288
Sep 9 00:18:34.020964 kernel: pcpu-alloc: s197160 r8192 d32216 u524288 alloc=1*2097152
Sep 9 00:18:34.020973 kernel: pcpu-alloc: [0] 0 1 2 3
Sep 9 00:18:34.020983 kernel: kvm-guest: PV spinlocks enabled
Sep 9 00:18:34.020993 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 9 00:18:34.021005 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:18:34.021024 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 00:18:34.021034 kernel: random: crng init done
Sep 9 00:18:34.021044 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 00:18:34.021054 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 00:18:34.021064 kernel: Fallback order for Node 0: 0
Sep 9 00:18:34.021074 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759
Sep 9 00:18:34.021093 kernel: Policy zone: DMA32
Sep 9 00:18:34.021104 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 00:18:34.021118 kernel: Memory: 2400600K/2567000K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42880K init, 2316K bss, 166140K reserved, 0K cma-reserved)
Sep 9 00:18:34.021129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 00:18:34.021139 kernel: ftrace: allocating 37969 entries in 149 pages
Sep 9 00:18:34.021149 kernel: ftrace: allocated 149 pages with 4 groups
Sep 9 00:18:34.021158 kernel: Dynamic Preempt: voluntary
Sep 9 00:18:34.021179 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 00:18:34.021195 kernel: rcu: RCU event tracing is enabled.
Sep 9 00:18:34.021206 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 00:18:34.021216 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 00:18:34.021227 kernel: Rude variant of Tasks RCU enabled.
Sep 9 00:18:34.021237 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 00:18:34.021248 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 00:18:34.021261 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 00:18:34.021272 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Sep 9 00:18:34.021287 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 00:18:34.021298 kernel: Console: colour dummy device 80x25
Sep 9 00:18:34.021309 kernel: printk: console [ttyS0] enabled
Sep 9 00:18:34.021324 kernel: ACPI: Core revision 20230628
Sep 9 00:18:34.021334 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 9 00:18:34.021345 kernel: APIC: Switch to symmetric I/O mode setup
Sep 9 00:18:34.021355 kernel: x2apic enabled
Sep 9 00:18:34.021366 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 9 00:18:34.021376 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Sep 9 00:18:34.021387 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Sep 9 00:18:34.021397 kernel: kvm-guest: setup PV IPIs
Sep 9 00:18:34.021408 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 9 00:18:34.021423 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Sep 9 00:18:34.021433 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Sep 9 00:18:34.021444 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 9 00:18:34.021455 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Sep 9 00:18:34.021465 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Sep 9 00:18:34.021476 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 9 00:18:34.021486 kernel: Spectre V2 : Mitigation: Retpolines
Sep 9 00:18:34.021497 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 9 00:18:34.021507 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Sep 9 00:18:34.021522 kernel: active return thunk: retbleed_return_thunk
Sep 9 00:18:34.021532 kernel: RETBleed: Mitigation: untrained return thunk
Sep 9 00:18:34.021543 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 9 00:18:34.021554 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 9 00:18:34.021569 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Sep 9 00:18:34.021581 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Sep 9 00:18:34.021592 kernel: active return thunk: srso_return_thunk
Sep 9 00:18:34.021603 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Sep 9 00:18:34.021618 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 9 00:18:34.021629 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 9 00:18:34.021639 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 9 00:18:34.021650 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 9 00:18:34.021661 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Sep 9 00:18:34.021671 kernel: Freeing SMP alternatives memory: 32K
Sep 9 00:18:34.021714 kernel: pid_max: default: 32768 minimum: 301
Sep 9 00:18:34.021725 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 9 00:18:34.021735 kernel: landlock: Up and running.
Sep 9 00:18:34.021750 kernel: SELinux: Initializing.
Sep 9 00:18:34.021761 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:18:34.021773 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 00:18:34.021784 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Sep 9 00:18:34.021795 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:18:34.021807 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:18:34.021818 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 00:18:34.021829 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Sep 9 00:18:34.021841 kernel: ... version: 0
Sep 9 00:18:34.021856 kernel: ... bit width: 48
Sep 9 00:18:34.021867 kernel: ... generic registers: 6
Sep 9 00:18:34.021878 kernel: ... value mask: 0000ffffffffffff
Sep 9 00:18:34.021889 kernel: ... max period: 00007fffffffffff
Sep 9 00:18:34.021900 kernel: ... fixed-purpose events: 0
Sep 9 00:18:34.021912 kernel: ... event mask: 000000000000003f
Sep 9 00:18:34.021923 kernel: signal: max sigframe size: 1776
Sep 9 00:18:34.021934 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 00:18:34.021946 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 00:18:34.021961 kernel: smp: Bringing up secondary CPUs ...
Sep 9 00:18:34.021972 kernel: smpboot: x86: Booting SMP configuration:
Sep 9 00:18:34.021983 kernel: .... node #0, CPUs: #1 #2 #3
Sep 9 00:18:34.021994 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 00:18:34.022005 kernel: smpboot: Max logical packages: 1
Sep 9 00:18:34.022016 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Sep 9 00:18:34.022027 kernel: devtmpfs: initialized
Sep 9 00:18:34.022037 kernel: x86/mm: Memory block size: 128MB
Sep 9 00:18:34.022048 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Sep 9 00:18:34.022063 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Sep 9 00:18:34.022074 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes)
Sep 9 00:18:34.022097 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Sep 9 00:18:34.022108 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Sep 9 00:18:34.022119 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 00:18:34.022130 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 00:18:34.022141 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 00:18:34.022151 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 00:18:34.022162 kernel: audit: initializing netlink subsys (disabled)
Sep 9 00:18:34.022177 kernel: audit: type=2000 audit(1757377112.777:1): state=initialized audit_enabled=0 res=1
Sep 9 00:18:34.022188 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 00:18:34.022197 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 9 00:18:34.022205 kernel: cpuidle: using governor menu
Sep 9 00:18:34.022213 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 00:18:34.022222 kernel: dca service started, version 1.12.1
Sep 9 00:18:34.022231 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 9 00:18:34.022241 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 9 00:18:34.022250 kernel: PCI: Using configuration type 1 for base access
Sep 9 00:18:34.022262 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 9 00:18:34.022286 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 00:18:34.022298 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 00:18:34.022308 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 00:18:34.022319 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 00:18:34.022327 kernel: ACPI: Added _OSI(Module Device)
Sep 9 00:18:34.022348 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 00:18:34.022377 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 00:18:34.022398 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 00:18:34.022422 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 9 00:18:34.022444 kernel: ACPI: Interpreter enabled
Sep 9 00:18:34.022454 kernel: ACPI: PM: (supports S0 S3 S5)
Sep 9 00:18:34.022465 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 9 00:18:34.022475 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 9 00:18:34.022485 kernel: PCI: Using E820 reservations for host bridge windows
Sep 9 00:18:34.022493 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 9 00:18:34.022501 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 00:18:34.022760 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 00:18:34.022950 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Sep 9 00:18:34.023108 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Sep 9 00:18:34.023120 kernel: PCI host bridge to bus 0000:00
Sep 9 00:18:34.023267 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 9 00:18:34.023385 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 9 00:18:34.023502 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 9 00:18:34.023624 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Sep 9 00:18:34.023769 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 9 00:18:34.023890 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window]
Sep 9 00:18:34.024102 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 00:18:34.024353 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 9 00:18:34.024544 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Sep 9 00:18:34.024756 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Sep 9 00:18:34.024894 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Sep 9 00:18:34.025022 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Sep 9 00:18:34.025167 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Sep 9 00:18:34.025295 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 9 00:18:34.025439 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Sep 9 00:18:34.025569 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Sep 9 00:18:34.025855 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Sep 9 00:18:34.025997 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref]
Sep 9 00:18:34.026174 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Sep 9 00:18:34.026304 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Sep 9 00:18:34.026620 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Sep 9 00:18:34.026815 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref]
Sep 9 00:18:34.026980 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 9 00:18:34.027129 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Sep 9 00:18:34.027258 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Sep 9 00:18:34.027386 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref]
Sep 9 00:18:34.027513 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Sep 9 00:18:34.027659 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 9 00:18:34.027831 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 9 00:18:34.028050 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 9 00:18:34.028245 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Sep 9 00:18:34.028385 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Sep 9 00:18:34.028530 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 9 00:18:34.028658 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Sep 9 00:18:34.028669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 9 00:18:34.028692 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 9 00:18:34.028704 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 9 00:18:34.028721 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 9 00:18:34.028732 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 9 00:18:34.028743 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 9 00:18:34.028753 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 9 00:18:34.028764 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 9 00:18:34.028774 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 9 00:18:34.028785 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 9 00:18:34.028795 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 9 00:18:34.028806 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 9 00:18:34.028817 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 9 00:18:34.028825 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 9 00:18:34.028833 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 9 00:18:34.028841 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 9 00:18:34.028849 kernel: iommu: Default domain type: Translated
Sep 9 00:18:34.028857 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 9 00:18:34.028865 kernel: efivars: Registered efivars operations
Sep 9 00:18:34.028873 kernel: PCI: Using ACPI for IRQ routing
Sep 9 00:18:34.028881 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 9 00:18:34.028893 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Sep 9 00:18:34.028901 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff]
Sep 9 00:18:34.028909 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff]
Sep 9 00:18:34.028917 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff]
Sep 9 00:18:34.029055 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 9 00:18:34.029197 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 9 00:18:34.029355 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 9 00:18:34.029368 kernel: vgaarb: loaded
Sep 9 00:18:34.029379 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 9 00:18:34.029392 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 9 00:18:34.029400 kernel: clocksource: Switched to clocksource kvm-clock
Sep 9 00:18:34.029409 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 00:18:34.029417 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 00:18:34.029425 kernel: pnp: PnP ACPI init
Sep 9 00:18:34.029581 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 9 00:18:34.029593 kernel: pnp: PnP ACPI: found 6 devices
Sep 9 00:18:34.029602 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 9 00:18:34.029614 kernel: NET: Registered PF_INET protocol family
Sep 9 00:18:34.029622 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 00:18:34.029630 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 00:18:34.029638 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 00:18:34.029646 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 00:18:34.029654 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 00:18:34.029662 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 00:18:34.029671 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:18:34.029700 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 00:18:34.029716 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 00:18:34.029727 kernel: NET: Registered PF_XDP protocol family
Sep 9 00:18:34.029898 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Sep 9 00:18:34.030071 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Sep 9 00:18:34.030253 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 9 00:18:34.030436 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 9 00:18:34.030596 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 9 00:18:34.030769 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Sep 9 00:18:34.030931 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 9 00:18:34.031095 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window]
Sep 9 00:18:34.031112 kernel: PCI: CLS 0 bytes, default 64
Sep 9 00:18:34.031123 kernel: Initialise system trusted keyrings
Sep 9 00:18:34.031135 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 00:18:34.031146 kernel: Key type asymmetric registered
Sep 9 00:18:34.031157 kernel: Asymmetric key parser 'x509' registered
Sep 9 00:18:34.031168 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 9 00:18:34.031180 kernel: io scheduler mq-deadline registered
Sep 9 00:18:34.031196 kernel: io scheduler kyber registered
Sep 9 00:18:34.031208 kernel: io scheduler bfq registered
Sep 9 00:18:34.031219 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 9 00:18:34.031231 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 9 00:18:34.031243 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Sep 9 00:18:34.031254 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Sep 9 00:18:34.031265 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 00:18:34.031276 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 9 00:18:34.031288 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 9 00:18:34.031303 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 9 00:18:34.031314 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 9 00:18:34.031674 kernel: rtc_cmos 00:04: RTC can wake from S4
Sep 9 00:18:34.031709 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 9 00:18:34.031874 kernel: rtc_cmos 00:04: registered as rtc0
Sep 9 00:18:34.032035 kernel: rtc_cmos 00:04: setting system clock to 2025-09-09T00:18:33 UTC (1757377113)
Sep 9 00:18:34.032209 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Sep 9 00:18:34.032232 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Sep 9 00:18:34.032243 kernel: efifb: probing for efifb
Sep 9 00:18:34.032255 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k
Sep 9 00:18:34.032266 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1
Sep 9 00:18:34.032277 kernel: efifb: scrolling: redraw
Sep 9 00:18:34.032288 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0
Sep 9 00:18:34.032300 kernel: Console: switching to colour frame buffer device 100x37
Sep 9 00:18:34.032334 kernel: fb0: EFI VGA frame buffer device
Sep 9 00:18:34.032349 kernel: pstore: Using crash dump compression: deflate
Sep 9 00:18:34.032363 kernel: pstore: Registered efi_pstore as persistent store backend
Sep 9 00:18:34.032375 kernel: NET: Registered PF_INET6 protocol family
Sep 9 00:18:34.032386 kernel: Segment Routing with IPv6
Sep 9 00:18:34.032398 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 00:18:34.032410 kernel: NET: Registered PF_PACKET protocol family
Sep 9 00:18:34.032421 kernel: Key type dns_resolver registered
Sep 9 00:18:34.032432 kernel: IPI shorthand broadcast: enabled
Sep 9 00:18:34.032444 kernel: sched_clock: Marking stable (1187005454, 204694122)->(1423025870, -31326294)
Sep 9 00:18:34.032455 kernel: registered taskstats version 1
Sep 9 00:18:34.032467 kernel: Loading compiled-in X.509 certificates
Sep 9 00:18:34.032483 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.104-flatcar: cc5240ef94b546331b2896cdc739274c03278c51'
Sep 9 00:18:34.032493 kernel: Key type .fscrypt registered
Sep 9 00:18:34.032502 kernel: Key type fscrypt-provisioning registered
Sep 9 00:18:34.032512 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 00:18:34.032522 kernel: ima: Allocated hash algorithm: sha1
Sep 9 00:18:34.032531 kernel: ima: No architecture policies found
Sep 9 00:18:34.032541 kernel: clk: Disabling unused clocks
Sep 9 00:18:34.032551 kernel: Freeing unused kernel image (initmem) memory: 42880K
Sep 9 00:18:34.032563 kernel: Write protecting the kernel read-only data: 36864k
Sep 9 00:18:34.032573 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 9 00:18:34.032583 kernel: Run /init as init process
Sep 9 00:18:34.032592 kernel: with arguments:
Sep 9 00:18:34.032602 kernel: /init
Sep 9 00:18:34.032611 kernel: with environment:
Sep 9 00:18:34.032621 kernel: HOME=/
Sep 9 00:18:34.032630 kernel: TERM=linux
Sep 9 00:18:34.032640 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 00:18:34.032655 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 9 00:18:34.032669 systemd[1]: Detected virtualization kvm.
Sep 9 00:18:34.032696 systemd[1]: Detected architecture x86-64.
Sep 9 00:18:34.032711 systemd[1]: Running in initrd.
Sep 9 00:18:34.032730 systemd[1]: No hostname configured, using default hostname.
Sep 9 00:18:34.032742 systemd[1]: Hostname set to <localhost>.
Sep 9 00:18:34.032755 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 00:18:34.032767 systemd[1]: Queued start job for default target initrd.target.
Sep 9 00:18:34.032779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 00:18:34.032791 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 00:18:34.032804 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 00:18:34.032817 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 00:18:34.032833 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 00:18:34.032846 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 00:18:34.032860 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 00:18:34.032873 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 00:18:34.032885 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 00:18:34.032898 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 00:18:34.032910 systemd[1]: Reached target paths.target - Path Units.
Sep 9 00:18:34.032926 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 00:18:34.032939 systemd[1]: Reached target swap.target - Swaps.
Sep 9 00:18:34.032951 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 00:18:34.032963 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 00:18:34.032976 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 00:18:34.032988 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 00:18:34.033000 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 9 00:18:34.033012 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 00:18:34.033025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 00:18:34.033065 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 00:18:34.033078 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 00:18:34.033102 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 00:18:34.033115 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 00:18:34.033127 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 00:18:34.033139 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 00:18:34.033151 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 00:18:34.033162 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 00:18:34.033179 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:18:34.033191 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 00:18:34.033203 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 00:18:34.033215 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 00:18:34.033227 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 00:18:34.033271 systemd-journald[190]: Collecting audit messages is disabled.
Sep 9 00:18:34.033300 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:18:34.033312 systemd-journald[190]: Journal started
Sep 9 00:18:34.033341 systemd-journald[190]: Runtime Journal (/run/log/journal/3170a45f35b64dda8e3f6440661daa34) is 6.0M, max 48.3M, 42.2M free.
Sep 9 00:18:34.024746 systemd-modules-load[194]: Inserted module 'overlay'
Sep 9 00:18:34.049514 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:18:34.051594 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 00:18:34.052205 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 00:18:34.058949 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 00:18:34.064268 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 00:18:34.066998 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 00:18:34.069027 systemd-modules-load[194]: Inserted module 'br_netfilter'
Sep 9 00:18:34.070426 kernel: Bridge firewalling registered
Sep 9 00:18:34.070469 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:18:34.073586 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 00:18:34.077584 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 00:18:34.080711 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 00:18:34.083538 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 00:18:34.086300 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 00:18:34.099228 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 00:18:34.101716 dracut-cmdline[220]: dracut-dracut-053
Sep 9 00:18:34.103179 dracut-cmdline[220]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=99a67175ee6aabbc03a22dabcade16d60ad192b31c4118a259bf1f24bbfa2d29
Sep 9 00:18:34.117049 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 00:18:34.151508 systemd-resolved[238]: Positive Trust Anchors:
Sep 9 00:18:34.151528 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 00:18:34.151561 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 00:18:34.154654 systemd-resolved[238]: Defaulting to hostname 'linux'.
Sep 9 00:18:34.156498 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 00:18:34.161501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 00:18:34.211733 kernel: SCSI subsystem initialized
Sep 9 00:18:34.221726 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 00:18:34.234729 kernel: iscsi: registered transport (tcp)
Sep 9 00:18:34.258727 kernel: iscsi: registered transport (qla4xxx)
Sep 9 00:18:34.258930 kernel: QLogic iSCSI HBA Driver
Sep 9 00:18:34.319796 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 00:18:34.329862 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 00:18:34.369752 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 00:18:34.369880 kernel: device-mapper: uevent: version 1.0.3
Sep 9 00:18:34.369900 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 9 00:18:34.430777 kernel: raid6: avx2x4 gen() 17096 MB/s
Sep 9 00:18:34.449018 kernel: raid6: avx2x2 gen() 15360 MB/s
Sep 9 00:18:34.465101 kernel: raid6: avx2x1 gen() 13255 MB/s
Sep 9 00:18:34.466395 kernel: raid6: using algorithm avx2x4 gen() 17096 MB/s
Sep 9 00:18:34.483118 kernel: raid6: .... xor() 5589 MB/s, rmw enabled
Sep 9 00:18:34.483227 kernel: raid6: using avx2x2 recovery algorithm
Sep 9 00:18:34.515735 kernel: xor: automatically using best checksumming function avx
Sep 9 00:18:34.711743 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 00:18:34.726790 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 00:18:34.736868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 00:18:34.758825 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Sep 9 00:18:34.766163 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 00:18:34.774154 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 00:18:34.794943 dracut-pre-trigger[417]: rd.md=0: removing MD RAID activation
Sep 9 00:18:34.837676 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 00:18:34.850937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 00:18:34.952238 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 00:18:34.963276 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 00:18:34.979739 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 00:18:34.983970 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 00:18:34.986869 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 00:18:34.990121 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 00:18:35.000910 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 00:18:35.010718 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Sep 9 00:18:35.015239 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 00:18:35.021626 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 00:18:35.021661 kernel: GPT:9289727 != 19775487
Sep 9 00:18:35.021679 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 00:18:35.021709 kernel: GPT:9289727 != 19775487
Sep 9 00:18:35.021725 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 00:18:35.022706 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:18:35.024152 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 00:18:35.027581 kernel: cryptd: max_cpu_qlen set to 1000
Sep 9 00:18:35.032699 kernel: libata version 3.00 loaded.
Sep 9 00:18:35.045754 kernel: ahci 0000:00:1f.2: version 3.0
Sep 9 00:18:35.053091 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 9 00:18:35.053147 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 9 00:18:35.049155 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 00:18:35.062023 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 9 00:18:35.063717 kernel: scsi host0: ahci
Sep 9 00:18:35.049550 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:18:35.068345 kernel: BTRFS: device fsid 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (473)
Sep 9 00:18:35.068369 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (461)
Sep 9 00:18:35.068386 kernel: scsi host1: ahci
Sep 9 00:18:35.056050 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:18:35.058169 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:18:35.071690 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 9 00:18:35.071714 kernel: AES CTR mode by8 optimization enabled
Sep 9 00:18:35.058372 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:18:35.060611 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:18:35.084717 kernel: scsi host2: ahci
Sep 9 00:18:35.085135 kernel: scsi host3: ahci
Sep 9 00:18:35.085360 kernel: scsi host4: ahci
Sep 9 00:18:35.085585 kernel: scsi host5: ahci
Sep 9 00:18:35.085861 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Sep 9 00:18:35.080198 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:18:35.093344 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Sep 9 00:18:35.093379 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Sep 9 00:18:35.093400 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Sep 9 00:18:35.093416 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Sep 9 00:18:35.093435 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Sep 9 00:18:35.103223 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 00:18:35.113974 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 00:18:35.127140 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 00:18:35.133452 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 00:18:35.135000 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 00:18:35.150102 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 00:18:35.150282 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 00:18:35.150382 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:18:35.153599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:18:35.155643 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 00:18:35.169473 disk-uuid[557]: Primary Header is updated.
Sep 9 00:18:35.169473 disk-uuid[557]: Secondary Entries is updated.
Sep 9 00:18:35.169473 disk-uuid[557]: Secondary Header is updated.
Sep 9 00:18:35.172882 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:18:35.176874 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 00:18:35.180372 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:18:35.184994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 00:18:35.208477 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 00:18:35.395753 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 9 00:18:35.395866 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 9 00:18:35.403743 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 9 00:18:35.403871 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Sep 9 00:18:35.405184 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Sep 9 00:18:35.405216 kernel: ata3.00: applying bridge limits
Sep 9 00:18:35.406226 kernel: ata3.00: configured for UDMA/100
Sep 9 00:18:35.406716 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 9 00:18:35.407726 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 9 00:18:35.409279 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Sep 9 00:18:35.461723 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Sep 9 00:18:35.462129 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Sep 9 00:18:35.475716 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Sep 9 00:18:36.178503 disk-uuid[559]: The operation has completed successfully.
Sep 9 00:18:36.180225 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 00:18:36.213794 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 00:18:36.213964 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 00:18:36.252084 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 00:18:36.256202 sh[596]: Success
Sep 9 00:18:36.270712 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Sep 9 00:18:36.314220 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 00:18:36.325803 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 00:18:36.331196 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 00:18:36.343306 kernel: BTRFS info (device dm-0): first mount of filesystem 7cd16ef1-c91b-4e35-a9b3-a431b3c1949a
Sep 9 00:18:36.343379 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:18:36.343392 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 9 00:18:36.344438 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 00:18:36.345173 kernel: BTRFS info (device dm-0): using free space tree
Sep 9 00:18:36.351229 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 00:18:36.352086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 00:18:36.357000 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 00:18:36.359974 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 00:18:36.373340 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:18:36.373422 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 9 00:18:36.373440 kernel: BTRFS info (device vda6): using free space tree
Sep 9 00:18:36.377751 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 9 00:18:36.390324 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 9 00:18:36.392173 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33
Sep 9 00:18:36.403130 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 00:18:36.411015 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 00:18:36.484432 ignition[692]: Ignition 2.19.0
Sep 9 00:18:36.484445 ignition[692]: Stage: fetch-offline
Sep 9 00:18:36.484496 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:18:36.484508 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:18:36.484622 ignition[692]: parsed url from cmdline: ""
Sep 9 00:18:36.484626 ignition[692]: no config URL provided
Sep 9 00:18:36.484632 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 00:18:36.484642 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Sep 9 00:18:36.484671 ignition[692]: op(1): [started] loading QEMU firmware config module
Sep 9 00:18:36.484699 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 00:18:36.523533 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 00:18:36.524839 ignition[692]: op(1): [finished] loading QEMU firmware config module
Sep 9 00:18:36.536891 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 00:18:36.564121 systemd-networkd[785]: lo: Link UP
Sep 9 00:18:36.564135 systemd-networkd[785]: lo: Gained carrier
Sep 9 00:18:36.608317 systemd-networkd[785]: Enumeration completed
Sep 9 00:18:36.608444 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 00:18:36.610182 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:18:36.610186 systemd-networkd[785]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 00:18:36.610655 systemd[1]: Reached target network.target - Network.
Sep 9 00:18:36.611300 systemd-networkd[785]: eth0: Link UP
Sep 9 00:18:36.611306 systemd-networkd[785]: eth0: Gained carrier
Sep 9 00:18:36.611315 systemd-networkd[785]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 00:18:36.642770 systemd-networkd[785]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 00:18:36.650159 ignition[692]: parsing config with SHA512: be9d8a7a06e50e4f49227e1c716bbb48ae744648417648a21cd09376f7f9fd878f57541e1254bab0c52cc76f5e08183f8bf9715dc846119fe095b7530f33df7d
Sep 9 00:18:36.655296 unknown[692]: fetched base config from "system"
Sep 9 00:18:36.655325 unknown[692]: fetched user config from "qemu"
Sep 9 00:18:36.656710 ignition[692]: fetch-offline: fetch-offline passed
Sep 9 00:18:36.700873 systemd-resolved[238]: Detected conflict on linux IN A 10.0.0.15
Sep 9 00:18:36.656889 ignition[692]: Ignition finished successfully
Sep 9 00:18:36.700895 systemd-resolved[238]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Sep 9 00:18:36.708960 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 00:18:36.709385 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 00:18:36.738072 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 00:18:36.759637 ignition[789]: Ignition 2.19.0
Sep 9 00:18:36.759651 ignition[789]: Stage: kargs
Sep 9 00:18:36.759855 ignition[789]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:18:36.759868 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:18:36.760659 ignition[789]: kargs: kargs passed
Sep 9 00:18:36.760724 ignition[789]: Ignition finished successfully
Sep 9 00:18:36.807521 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 00:18:36.861982 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 00:18:36.901319 ignition[798]: Ignition 2.19.0
Sep 9 00:18:36.901339 ignition[798]: Stage: disks
Sep 9 00:18:36.901539 ignition[798]: no configs at "/usr/lib/ignition/base.d"
Sep 9 00:18:36.901551 ignition[798]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 00:18:36.905530 ignition[798]: disks: disks passed
Sep 9 00:18:36.905583 ignition[798]: Ignition finished successfully
Sep 9 00:18:36.909730 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 00:18:36.912102 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 00:18:36.914529 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 00:18:36.917190 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 00:18:36.919366 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 00:18:36.921559 systemd[1]: Reached target basic.target - Basic System.
Sep 9 00:18:36.937999 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 00:18:36.971675 systemd-fsck[808]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 9 00:18:37.385293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 00:18:37.406830 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 00:18:37.496731 kernel: EXT4-fs (vda9): mounted filesystem ee55a213-d578-493d-a79b-e10c399cd35c r/w with ordered data mode. Quota mode: none.
Sep 9 00:18:37.497498 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 00:18:37.499734 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 00:18:37.513916 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 00:18:37.517261 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 00:18:37.520234 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 00:18:37.520331 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 00:18:37.529369 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (816) Sep 9 00:18:37.529400 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:18:37.529412 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:37.529424 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:18:37.520377 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:18:37.531498 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:18:37.534042 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:18:37.536102 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 00:18:37.555972 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 00:18:37.593753 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 00:18:37.599720 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory Sep 9 00:18:37.606564 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 00:18:37.612859 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 00:18:37.730936 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 9 00:18:37.743968 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 00:18:37.746650 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 00:18:37.754649 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 00:18:37.756193 kernel: BTRFS info (device vda6): last unmount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:18:37.790294 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 00:18:37.799163 ignition[929]: INFO : Ignition 2.19.0 Sep 9 00:18:37.799163 ignition[929]: INFO : Stage: mount Sep 9 00:18:37.800997 ignition[929]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:37.800997 ignition[929]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:37.800997 ignition[929]: INFO : mount: mount passed Sep 9 00:18:37.800997 ignition[929]: INFO : Ignition finished successfully Sep 9 00:18:37.803053 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 00:18:37.810893 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 00:18:38.293029 systemd-networkd[785]: eth0: Gained IPv6LL Sep 9 00:18:38.516065 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
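
The cut: /sysroot/etc/passwd: No such file or directory lines are expected on a first boot: initrd-setup-root probes the freshly mounted root for existing account databases and evidently tolerates their absence, creating what is missing. A hypothetical sketch of such idempotent seeding; the real unit is a shell script, and the baseline contents below are placeholders:

    from pathlib import Path

    SYSROOT = Path("/sysroot")

    # Placeholder baselines -- the real setup derives these from the OS
    # image rather than hardcoding them here.
    BASELINES = {
        "passwd":  "root:x:0:0:root:/root:/bin/bash\n",
        "group":   "root:x:0:\n",
        "shadow":  "root:*:::::::\n",
        "gshadow": "root:*::\n",
    }

    def seed_account_databases() -> None:
        for name, content in BASELINES.items():
            target = SYSROOT / "etc" / name
            if not target.exists():      # leave existing databases alone
                target.write_text(content)
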
Sep 9 00:18:38.524443 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (943) Sep 9 00:18:38.524510 kernel: BTRFS info (device vda6): first mount of filesystem a5263def-4663-4ce6-b873-45a7d7f1ec33 Sep 9 00:18:38.524523 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 9 00:18:38.526083 kernel: BTRFS info (device vda6): using free space tree Sep 9 00:18:38.528708 kernel: BTRFS info (device vda6): auto enabling async discard Sep 9 00:18:38.530313 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 00:18:38.555796 ignition[960]: INFO : Ignition 2.19.0 Sep 9 00:18:38.555796 ignition[960]: INFO : Stage: files Sep 9 00:18:38.558108 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:38.558108 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:38.558108 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Sep 9 00:18:38.558108 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 00:18:38.558108 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 00:18:38.565907 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 00:18:38.565907 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 00:18:38.565907 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 00:18:38.565907 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:18:38.565907 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Sep 9 00:18:38.560641 unknown[960]: wrote ssh authorized keys file for user: core Sep 9 00:18:38.633510 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 00:18:40.328601 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Sep 9 00:18:40.328601 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 00:18:40.333671 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing 
file "/sysroot/etc/flatcar/update.conf" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:40.346148 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Sep 9 00:18:40.671727 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 9 00:18:41.395031 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Sep 9 00:18:41.395031 ignition[960]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 9 00:18:41.435389 ignition[960]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Sep 9 00:18:41.437269 ignition[960]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 00:18:41.480869 ignition[960]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:18:41.492290 ignition[960]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 00:18:41.494073 ignition[960]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 00:18:41.494073 ignition[960]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Sep 9 00:18:41.496825 ignition[960]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 00:18:41.498302 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:18:41.500075 ignition[960]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 00:18:41.501721 ignition[960]: INFO : files: files passed Sep 9 
00:18:41.502463 ignition[960]: INFO : Ignition finished successfully Sep 9 00:18:41.505989 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 00:18:41.516917 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 00:18:41.517720 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 00:18:41.528512 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 00:18:41.529733 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 00:18:41.530945 initrd-setup-root-after-ignition[987]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 00:18:41.534186 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:41.534186 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:41.538468 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 00:18:41.535939 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:18:41.539080 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 00:18:41.548834 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 00:18:41.575124 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 00:18:41.575269 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 9 00:18:41.587617 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 00:18:41.589737 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 00:18:41.590809 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 00:18:41.591662 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 00:18:41.609715 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:18:41.627813 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 00:18:41.696666 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:18:41.697962 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:18:41.700181 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 00:18:41.702154 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 00:18:41.702285 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 00:18:41.704597 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 00:18:41.706139 systemd[1]: Stopped target basic.target - Basic System. Sep 9 00:18:41.708159 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 00:18:41.710161 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 00:18:41.712209 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 00:18:41.714303 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 00:18:41.716350 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 00:18:41.718590 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 00:18:41.720583 systemd[1]: Stopped target local-fs.target - Local File Systems. 
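
Every remote asset in the files stage is fetched with numbered attempts ("GET ...: attempt #1") before being written under /sysroot, as with the helm tarball and the kubernetes sysext image above. A self-contained sketch of that retry-then-write loop; the backoff constants are illustrative, and Ignition's real policy also verifies checksums:

    import time
    import urllib.request
    from pathlib import Path

    def fetch_with_retries(url: str, dest: Path, attempts: int = 5) -> None:
        for attempt in range(1, attempts + 1):
            print(f"GET {url}: attempt #{attempt}")
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    dest.parent.mkdir(parents=True, exist_ok=True)
                    dest.write_bytes(resp.read())
                return                      # "GET result: OK"
            except OSError:
                if attempt == attempts:
                    raise
                time.sleep(2 ** attempt)    # back off before retrying

    # e.g. fetch_with_retries(
    #     "https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz",
    #     Path("/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"))
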
Sep 9 00:18:41.722792 systemd[1]: Stopped target swap.target - Swaps. Sep 9 00:18:41.724543 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 00:18:41.724720 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 00:18:41.726983 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:18:41.728437 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:18:41.730510 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 00:18:41.730635 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:18:41.732718 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 00:18:41.732859 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 00:18:41.735116 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 00:18:41.735249 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 00:18:41.737048 systemd[1]: Stopped target paths.target - Path Units. Sep 9 00:18:41.738774 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 00:18:41.742733 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:18:41.744249 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 00:18:41.746204 systemd[1]: Stopped target sockets.target - Socket Units. Sep 9 00:18:41.748019 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 00:18:41.748140 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 00:18:41.750407 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 00:18:41.750526 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 00:18:41.754743 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 00:18:41.754914 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 00:18:41.757015 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 00:18:41.757133 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 00:18:41.764889 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 00:18:41.765955 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 00:18:41.766073 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:18:41.769243 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 00:18:41.770455 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 00:18:41.770579 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:18:41.772142 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 00:18:41.772257 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 00:18:41.778270 systemd[1]: initrd-cleanup.service: Deactivated successfully. 
Sep 9 00:18:41.786618 ignition[1014]: INFO : Ignition 2.19.0 Sep 9 00:18:41.786618 ignition[1014]: INFO : Stage: umount Sep 9 00:18:41.786618 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 00:18:41.786618 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 00:18:41.786618 ignition[1014]: INFO : umount: umount passed Sep 9 00:18:41.786618 ignition[1014]: INFO : Ignition finished successfully Sep 9 00:18:41.778439 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 00:18:41.788819 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 00:18:41.788998 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 00:18:41.791369 systemd[1]: Stopped target network.target - Network. Sep 9 00:18:41.793768 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 00:18:41.793848 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 00:18:41.796121 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 00:18:41.796175 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 00:18:41.798491 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 00:18:41.798576 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 00:18:41.801506 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 00:18:41.801574 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 00:18:41.803135 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 00:18:41.805560 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 00:18:41.810037 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 00:18:41.811174 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 00:18:41.811373 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 00:18:41.811761 systemd-networkd[785]: eth0: DHCPv6 lease lost Sep 9 00:18:41.814335 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 00:18:41.814617 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 00:18:41.818208 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 9 00:18:41.818422 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 00:18:41.822476 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 00:18:41.822545 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:18:41.823768 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 00:18:41.823857 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 00:18:41.834845 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 00:18:41.836643 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 00:18:41.836718 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 00:18:41.839004 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 00:18:41.839111 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:41.841214 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 00:18:41.841267 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 00:18:41.843461 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Sep 9 00:18:41.843526 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:18:41.846022 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:18:41.868004 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 00:18:41.868299 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:18:41.870721 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 00:18:41.870781 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 00:18:41.873004 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 00:18:41.873062 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:18:41.875031 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 00:18:41.875084 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 00:18:41.877420 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 00:18:41.877473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 00:18:41.879612 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 00:18:41.879704 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 00:18:41.892882 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 00:18:41.894226 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 00:18:41.894291 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:18:41.897356 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:18:41.897430 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:41.900582 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 00:18:41.900724 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 00:18:41.904545 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 00:18:41.904699 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 00:18:41.906718 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 00:18:41.916866 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 00:18:41.923963 systemd[1]: Switching root. Sep 9 00:18:41.959353 systemd-journald[190]: Journal stopped Sep 9 00:18:44.413944 systemd-journald[190]: Received SIGTERM from PID 1 (systemd). Sep 9 00:18:44.414016 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 00:18:44.414037 kernel: SELinux: policy capability open_perms=1 Sep 9 00:18:44.414058 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 00:18:44.414069 kernel: SELinux: policy capability always_check_network=0 Sep 9 00:18:44.414081 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 00:18:44.414100 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 00:18:44.414111 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 00:18:44.414123 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 00:18:44.414135 kernel: audit: type=1403 audit(1757377123.203:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 00:18:44.414148 systemd[1]: Successfully loaded SELinux policy in 71.527ms. Sep 9 00:18:44.414172 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.692ms. 
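
The SELinux: policy capability ... lines printed right after the switch root enumerate the capability flags compiled into the policy that was just loaded. The same flags can be read back at runtime through selinuxfs; a small sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux:

    from pathlib import Path

    def selinux_policy_capabilities() -> dict[str, bool]:
        # One file per capability (e.g. network_peer_controls) containing
        # "1" or "0", mirroring the kernel log lines above.
        capdir = Path("/sys/fs/selinux/policy_capabilities")
        return {p.name: p.read_text().strip() == "1" for p in capdir.iterdir()}
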
Sep 9 00:18:44.414190 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 9 00:18:44.414203 systemd[1]: Detected virtualization kvm. Sep 9 00:18:44.414215 systemd[1]: Detected architecture x86-64. Sep 9 00:18:44.414231 systemd[1]: Detected first boot. Sep 9 00:18:44.414248 systemd[1]: Initializing machine ID from VM UUID. Sep 9 00:18:44.414260 zram_generator::config[1060]: No configuration found. Sep 9 00:18:44.414275 systemd[1]: Populated /etc with preset unit settings. Sep 9 00:18:44.414287 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 00:18:44.414299 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 00:18:44.414311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 00:18:44.414331 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 00:18:44.414350 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 00:18:44.414362 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 00:18:44.414375 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 00:18:44.414393 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 00:18:44.414405 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 00:18:44.414418 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 00:18:44.414431 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 00:18:44.414443 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 00:18:44.414456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 00:18:44.414471 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 00:18:44.414484 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 00:18:44.414497 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 00:18:44.414509 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 00:18:44.414522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 9 00:18:44.414534 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 00:18:44.414547 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 00:18:44.414559 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 00:18:44.414572 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 00:18:44.414587 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 00:18:44.414599 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 00:18:44.414615 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 00:18:44.414628 systemd[1]: Reached target slices.target - Slice Units. Sep 9 00:18:44.414640 systemd[1]: Reached target swap.target - Swaps. 
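
"Initializing machine ID from VM UUID" means that on a first boot under KVM, /etc/machine-id is seeded from the hypervisor-assigned DMI product UUID rather than from random bytes, so the ID stays stable if the same VM is reprovisioned. A sketch of the observable effect; the canonical logic lives in systemd, and the sysfs path is the standard DMI location:

    from pathlib import Path

    def machine_id_from_vm_uuid() -> str:
        # DMI product UUID as exposed by the kernel; under QEMU/KVM this is
        # the UUID the hypervisor assigned to the guest.
        uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        # machine-id format: 32 lowercase hex characters, no dashes.
        return uuid.replace("-", "").lower()
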
Sep 9 00:18:44.414653 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 9 00:18:44.414665 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 00:18:44.414690 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 00:18:44.414706 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 00:18:44.414719 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 00:18:44.414731 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 00:18:44.414743 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 00:18:44.414755 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 00:18:44.414768 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 00:18:44.414781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:44.414793 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 00:18:44.414815 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 00:18:44.414832 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 00:18:44.414845 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 00:18:44.414857 systemd[1]: Reached target machines.target - Containers. Sep 9 00:18:44.414870 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 00:18:44.414882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:18:44.414894 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 00:18:44.414911 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 00:18:44.414923 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:18:44.414938 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:18:44.414950 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:18:44.414963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 00:18:44.414992 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:18:44.415005 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 00:18:44.415017 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 00:18:44.415029 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 00:18:44.415041 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 00:18:44.415056 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 00:18:44.415068 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 00:18:44.415081 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 00:18:44.415092 kernel: fuse: init (API version 7.39) Sep 9 00:18:44.415105 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 00:18:44.415122 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Sep 9 00:18:44.415134 kernel: loop: module loaded Sep 9 00:18:44.415146 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 00:18:44.415158 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 00:18:44.415189 systemd-journald[1123]: Collecting audit messages is disabled. Sep 9 00:18:44.415252 systemd[1]: Stopped verity-setup.service. Sep 9 00:18:44.415268 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:44.415280 systemd-journald[1123]: Journal started Sep 9 00:18:44.415302 systemd-journald[1123]: Runtime Journal (/run/log/journal/3170a45f35b64dda8e3f6440661daa34) is 6.0M, max 48.3M, 42.2M free. Sep 9 00:18:44.077525 systemd[1]: Queued start job for default target multi-user.target. Sep 9 00:18:44.100757 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 00:18:44.101315 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 00:18:44.419631 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 00:18:44.420233 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 00:18:44.421444 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 00:18:44.422615 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 00:18:44.423663 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 00:18:44.424835 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 00:18:44.426000 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 00:18:44.427272 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 00:18:44.428835 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 00:18:44.429067 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 00:18:44.430488 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:18:44.430773 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:18:44.432192 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:18:44.432400 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:18:44.433904 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 00:18:44.434102 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 00:18:44.435460 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:18:44.435629 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:18:44.437058 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 00:18:44.443094 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 00:18:44.444665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 00:18:44.458649 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 00:18:44.460702 kernel: ACPI: bus type drm_connector registered Sep 9 00:18:44.470934 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 00:18:44.473981 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 00:18:44.506110 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Sep 9 00:18:44.506172 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 00:18:44.508904 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 9 00:18:44.518922 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 00:18:44.553665 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 00:18:44.554979 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:18:44.570128 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 00:18:44.591478 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 00:18:44.593032 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:18:44.595957 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 00:18:44.598465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:18:44.602790 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 00:18:44.611990 systemd-journald[1123]: Time spent on flushing to /var/log/journal/3170a45f35b64dda8e3f6440661daa34 is 16.172ms for 991 entries. Sep 9 00:18:44.611990 systemd-journald[1123]: System Journal (/var/log/journal/3170a45f35b64dda8e3f6440661daa34) is 8.0M, max 195.6M, 187.6M free. Sep 9 00:18:44.679312 systemd-journald[1123]: Received client request to flush runtime journal. Sep 9 00:18:44.679387 kernel: loop0: detected capacity change from 0 to 142488 Sep 9 00:18:44.608750 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 00:18:44.613643 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:18:44.614245 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:18:44.617374 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 00:18:44.619140 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 00:18:44.620990 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 00:18:44.638246 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 00:18:44.660242 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 9 00:18:44.663668 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 00:18:44.665503 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 00:18:44.680902 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 9 00:18:44.683224 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 00:18:44.687498 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 00:18:44.689821 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 00:18:44.692772 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 00:18:44.706187 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 00:18:44.708294 udevadm[1178]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. 
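
The journal size reports above are consistent with journald's documented default budget of 10% of the backing filesystem: "max 48.3M" for the runtime journal implies a /run tmpfs of roughly 483M, and the flush line works out to about 16 µs per entry. A quick sketch of both calculations:

    import os

    def journal_budget(path: str) -> tuple[float, float]:
        """Return (filesystem size, default 10% journal cap) in MiB for the
        filesystem backing path, e.g. /run/log/journal or /var/log/journal."""
        st = os.statvfs(path)
        size_mib = st.f_frsize * st.f_blocks / 2**20
        return size_mib, size_mib / 10

    # Flush throughput from the log: 16.172 ms for 991 entries.
    print(f"{16.172 / 991 * 1000:.1f} us per entry")   # ~16.3 us
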
Sep 9 00:18:44.714995 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 00:18:44.717515 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 9 00:18:44.728723 kernel: loop1: detected capacity change from 0 to 140768 Sep 9 00:18:44.775900 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 00:18:44.783984 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 00:18:44.786709 kernel: loop2: detected capacity change from 0 to 229808 Sep 9 00:18:44.837473 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 9 00:18:44.837502 systemd-tmpfiles[1197]: ACLs are not supported, ignoring. Sep 9 00:18:44.846181 kernel: loop3: detected capacity change from 0 to 142488 Sep 9 00:18:44.848579 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 00:18:44.898727 kernel: loop4: detected capacity change from 0 to 140768 Sep 9 00:18:44.910805 kernel: loop5: detected capacity change from 0 to 229808 Sep 9 00:18:44.921730 (sd-merge)[1200]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 00:18:44.922578 (sd-merge)[1200]: Merged extensions into '/usr'. Sep 9 00:18:44.928216 systemd[1]: Reloading requested from client PID 1166 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 00:18:44.928239 systemd[1]: Reloading... Sep 9 00:18:45.021779 zram_generator::config[1226]: No configuration found. Sep 9 00:18:45.177636 ldconfig[1161]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 00:18:45.224769 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:18:45.292290 systemd[1]: Reloading finished in 362 ms. Sep 9 00:18:45.335377 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 00:18:45.337435 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 00:18:45.355115 systemd[1]: Starting ensure-sysext.service... Sep 9 00:18:45.358010 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 00:18:45.367363 systemd[1]: Reloading requested from client PID 1264 ('systemctl') (unit ensure-sysext.service)... Sep 9 00:18:45.367382 systemd[1]: Reloading... Sep 9 00:18:45.415856 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 00:18:45.416453 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 00:18:45.420981 systemd-tmpfiles[1265]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 00:18:45.421397 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Sep 9 00:18:45.421516 systemd-tmpfiles[1265]: ACLs are not supported, ignoring. Sep 9 00:18:45.426460 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 00:18:45.426486 systemd-tmpfiles[1265]: Skipping /boot Sep 9 00:18:45.437708 zram_generator::config[1295]: No configuration found. Sep 9 00:18:45.447028 systemd-tmpfiles[1265]: Detected autofs mount point /boot during canonicalization of boot. 
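
The loop0..loop5 capacity changes and the (sd-merge) lines above are systemd-sysext at work: each extension image is attached to a loop device and its /usr tree is stacked over the host's /usr with a read-only overlayfs. A rough sketch of the resulting mount; the staging paths under /run are illustrative, not sysext's actual internal layout:

    import subprocess

    def merge_sysext(extensions: list[str]) -> None:
        # In overlayfs the leftmost lowerdir is the top layer, so the
        # extension trees come first and the host /usr is the bottom.
        lowers = [f"/run/extensions/{name}/usr" for name in extensions]
        lowers.append("/usr")
        subprocess.run(
            ["mount", "-t", "overlay", "overlay",
             "-o", "ro,lowerdir=" + ":".join(lowers), "/usr"],
            check=True)

    # merge_sysext(["containerd-flatcar", "docker-flatcar", "kubernetes"])
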
Sep 9 00:18:45.447049 systemd-tmpfiles[1265]: Skipping /boot Sep 9 00:18:45.670984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:18:45.723669 systemd[1]: Reloading finished in 355 ms. Sep 9 00:18:45.746324 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 00:18:45.764539 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 9 00:18:45.767635 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 00:18:45.792081 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 00:18:45.797062 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 00:18:45.803053 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 00:18:45.815888 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.816111 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:18:45.824465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:18:45.830171 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:18:45.834068 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:18:45.836807 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:18:45.837028 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.854134 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 00:18:45.856419 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 00:18:45.861895 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 00:18:45.864296 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:18:45.864548 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:18:45.866837 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:18:45.867085 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:18:45.868266 augenrules[1354]: No rules Sep 9 00:18:45.870140 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 9 00:18:45.872545 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:18:45.872821 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:18:45.885482 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.886868 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:18:45.895097 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:18:45.897639 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:18:45.900077 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
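
The Duplicate line for path ... warnings a few entries back mean two tmpfiles.d entries claim the same path; systemd-tmpfiles keeps the first one it parses and ignores the rest. A sketch that reproduces the detection; parsing is simplified, since real tmpfiles lines carry a type, mode, ownership, and specifiers:

    from pathlib import Path

    def find_duplicate_paths(dirs=("/etc/tmpfiles.d", "/usr/lib/tmpfiles.d")):
        first_seen: dict[str, str] = {}
        for d in dirs:
            for frag in sorted(Path(d).glob("*.conf")):
                for lineno, line in enumerate(frag.read_text().splitlines(), 1):
                    fields = line.split()
                    if len(fields) < 2 or fields[0].startswith("#"):
                        continue
                    path, where = fields[1], f"{frag}:{lineno}"
                    if path in first_seen:
                        print(f'{where}: Duplicate line for path "{path}", '
                              f"ignoring (first seen at {first_seen[path]})")
                    else:
                        first_seen[path] = where
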
Sep 9 00:18:45.901269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:18:45.908983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 00:18:45.911924 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 00:18:45.913089 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.914327 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 00:18:45.918756 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 00:18:45.920978 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 00:18:45.923152 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:18:45.923390 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:18:45.925226 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:18:45.925604 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 00:18:45.928546 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:18:45.928777 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:18:45.930513 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 00:18:45.938831 systemd-udevd[1368]: Using default interface naming scheme 'v255'. Sep 9 00:18:45.944429 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.944646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 00:18:45.951025 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 00:18:45.954823 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 00:18:45.962962 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 00:18:45.973206 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 00:18:45.974424 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 00:18:45.974562 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 00:18:45.974642 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 9 00:18:45.977572 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 00:18:45.979882 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 00:18:45.980127 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 00:18:45.982281 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 00:18:45.982847 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 00:18:45.985298 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 00:18:45.985729 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 9 00:18:45.996381 systemd[1]: Finished ensure-sysext.service. Sep 9 00:18:46.012061 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 00:18:46.012291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 00:18:46.022739 systemd-resolved[1334]: Positive Trust Anchors: Sep 9 00:18:46.023155 systemd-resolved[1334]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 00:18:46.023282 systemd-resolved[1334]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 00:18:46.025281 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 9 00:18:46.028569 systemd-resolved[1334]: Defaulting to hostname 'linux'. Sep 9 00:18:46.036846 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1392) Sep 9 00:18:46.036807 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 00:18:46.039822 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 00:18:46.039912 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 00:18:46.043570 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 00:18:46.045666 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 00:18:46.051223 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 00:18:46.118152 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 00:18:46.124869 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 9 00:18:46.128006 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 00:18:46.138719 kernel: ACPI: button: Power Button [PWRF] Sep 9 00:18:46.145865 systemd-networkd[1410]: lo: Link UP Sep 9 00:18:46.145880 systemd-networkd[1410]: lo: Gained carrier Sep 9 00:18:46.148214 systemd-networkd[1410]: Enumeration completed Sep 9 00:18:46.148328 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 00:18:46.150031 systemd[1]: Reached target network.target - Network. Sep 9 00:18:46.152389 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:18:46.152403 systemd-networkd[1410]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 00:18:46.154179 systemd-networkd[1410]: eth0: Link UP Sep 9 00:18:46.154188 systemd-networkd[1410]: eth0: Gained carrier Sep 9 00:18:46.154202 systemd-networkd[1410]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 00:18:46.159990 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
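
The trust-anchor dump above pairs the root zone's DS record (the positive anchor resolved uses as its DNSSEC root of trust) with negative anchors: private and reserved zones that can never validate, for which resolved skips DNSSEC entirely. A sketch of the suffix check, with the anchor set abbreviated from the list in the log:

    NEGATIVE_ANCHORS = {
        "home.arpa", "10.in-addr.arpa", "168.192.in-addr.arpa",
        "d.f.ip6.arpa", "corp", "home", "internal", "intranet",
        "lan", "local", "private", "test",
    }

    def under_negative_anchor(name: str) -> bool:
        # A name is exempt from validation if any suffix of it is anchored.
        labels = name.rstrip(".").split(".")
        return any(".".join(labels[i:]) in NEGATIVE_ANCHORS
                   for i in range(len(labels)))

    assert under_negative_anchor("printer.local")
    assert not under_negative_anchor("example.org")
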
Sep 9 00:18:46.164932 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 9 00:18:46.170821 systemd-networkd[1410]: eth0: DHCPv4 address 10.0.0.15/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 00:18:46.191100 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Sep 9 00:18:46.191477 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 9 00:18:46.191505 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 9 00:18:46.193344 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 9 00:18:46.193649 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 9 00:18:46.208945 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 00:18:46.210368 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 00:18:46.685164 systemd-resolved[1334]: Clock change detected. Flushing caches. Sep 9 00:18:46.685708 systemd-timesyncd[1411]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 00:18:46.686094 systemd-timesyncd[1411]: Initial clock synchronization to Tue 2025-09-09 00:18:46.684967 UTC. Sep 9 00:18:46.715838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:18:46.725403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 00:18:46.725731 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:46.796867 kernel: mousedev: PS/2 mouse device common for all mice Sep 9 00:18:46.829822 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 00:18:46.879478 kernel: kvm_amd: TSC scaling supported Sep 9 00:18:46.879569 kernel: kvm_amd: Nested Virtualization enabled Sep 9 00:18:46.879607 kernel: kvm_amd: Nested Paging enabled Sep 9 00:18:46.880779 kernel: kvm_amd: LBR virtualization supported Sep 9 00:18:46.880813 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Sep 9 00:18:46.881588 kernel: kvm_amd: Virtual GIF supported Sep 9 00:18:46.907418 kernel: EDAC MC: Ver: 3.0.0 Sep 9 00:18:47.006772 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 00:18:47.018173 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 9 00:18:47.027763 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 9 00:18:47.036202 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:18:47.084474 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 9 00:18:47.086137 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 00:18:47.087290 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 00:18:47.088576 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 00:18:47.089878 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 00:18:47.091485 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 00:18:47.092779 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 00:18:47.094038 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
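
timesyncd's first successful exchange with 10.0.0.1:123 steps the system clock, which is why resolved logs "Clock change detected" and the journal timestamps jump by roughly half a second at this point. The size of the step comes from the standard (S)NTP offset calculation over the four exchange timestamps:

    def ntp_offset_and_delay(t1: float, t2: float, t3: float, t4: float):
        """t1/t4 are client send/receive times, t2/t3 are server
        receive/send times, all in seconds."""
        offset = ((t2 - t1) + (t3 - t4)) / 2.0   # how far the local clock is off
        delay = (t4 - t1) - (t3 - t2)            # round-trip network delay
        return offset, delay

    # A 0.47 s step as seen above, with a symmetric 2 ms round trip:
    print(ntp_offset_and_delay(0.000, 0.471, 0.471, 0.002))  # ~ (0.47, 0.002)
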
Sep 9 00:18:47.095301 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 00:18:47.095346 systemd[1]: Reached target paths.target - Path Units. Sep 9 00:18:47.096299 systemd[1]: Reached target timers.target - Timer Units. Sep 9 00:18:47.098325 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 00:18:47.102263 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 00:18:47.116728 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 00:18:47.119527 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 9 00:18:47.121317 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 00:18:47.122633 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 00:18:47.123718 systemd[1]: Reached target basic.target - Basic System. Sep 9 00:18:47.124870 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:18:47.124912 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 00:18:47.126427 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 00:18:47.128808 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 00:18:47.131492 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 9 00:18:47.133514 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 00:18:47.138660 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 00:18:47.140896 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 00:18:47.142770 jq[1445]: false Sep 9 00:18:47.143103 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 00:18:47.147951 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 00:18:47.151624 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 00:18:47.156683 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 00:18:47.163656 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 00:18:47.165298 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 00:18:47.166511 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 00:18:47.168639 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 00:18:47.174713 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 00:18:47.179898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 9 00:18:47.183087 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 00:18:47.183328 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 00:18:47.183729 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 00:18:47.184646 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 00:18:47.188066 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Sep 9 00:18:47.188315 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 9 00:18:47.197570 jq[1459]: true
Sep 9 00:18:47.199749 dbus-daemon[1444]: [system] SELinux support is enabled
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found loop3
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found loop4
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found loop5
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found sr0
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda1
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda2
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda3
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found usr
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda4
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda6
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda7
Sep 9 00:18:47.205085 extend-filesystems[1446]: Found vda9
Sep 9 00:18:47.205085 extend-filesystems[1446]: Checking size of /dev/vda9
Sep 9 00:18:47.204636 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 9 00:18:47.224104 update_engine[1458]: I20250909 00:18:47.213600 1458 main.cc:92] Flatcar Update Engine starting
Sep 9 00:18:47.224104 update_engine[1458]: I20250909 00:18:47.216778 1458 update_check_scheduler.cc:74] Next update check in 2m52s
Sep 9 00:18:47.223791 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 9 00:18:47.224694 extend-filesystems[1446]: Resized partition /dev/vda9
Sep 9 00:18:47.223827 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 9 00:18:47.231766 extend-filesystems[1479]: resize2fs 1.47.1 (20-May-2024)
Sep 9 00:18:47.226501 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 9 00:18:47.226522 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 9 00:18:47.226995 (ntainerd)[1473]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 9 00:18:47.229530 systemd[1]: Started update-engine.service - Update Engine.
Sep 9 00:18:47.235877 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 9 00:18:47.237316 tar[1463]: linux-amd64/LICENSE
Sep 9 00:18:47.237316 tar[1463]: linux-amd64/helm
Sep 9 00:18:47.242640 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 9 00:18:47.251276 jq[1471]: true
Sep 9 00:18:47.272486 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 9 00:18:47.321457 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1405)
Sep 9 00:18:47.323401 systemd-logind[1454]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 9 00:18:47.323441 systemd-logind[1454]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 9 00:18:47.325301 systemd-logind[1454]: New seat seat0.
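The extend-filesystems service is growing the root filesystem online: the kernel lines show ext4 on /dev/vda9 going from 553472 to 1864699 4k blocks (roughly 2.1 GiB to 7.1 GiB), and the resize2fs output continues in the next entries. A sketch of the equivalent manual operation, which works while the ext4 filesystem is mounted:

    # Hedged sketch: the manual equivalent of the online grow logged here.
    lsblk -b /dev/vda9        # confirm the enlarged partition size in bytes
    resize2fs /dev/vda9       # grow a mounted ext4 to fill its partition (online)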
Sep 9 00:18:47.335375 extend-filesystems[1479]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 9 00:18:47.335375 extend-filesystems[1479]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 9 00:18:47.335375 extend-filesystems[1479]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 9 00:18:47.381873 extend-filesystems[1446]: Resized filesystem in /dev/vda9
Sep 9 00:18:47.340187 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 9 00:18:47.347122 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 9 00:18:47.375228 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 9 00:18:47.461344 locksmithd[1481]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 9 00:18:47.503552 bash[1500]: Updated "/home/core/.ssh/authorized_keys"
Sep 9 00:18:47.504507 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 9 00:18:47.508067 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 9 00:18:47.902342 containerd[1473]: time="2025-09-09T00:18:47.902231761Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 9 00:18:47.998416 containerd[1473]: time="2025-09-09T00:18:47.998319063Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.002276 containerd[1473]: time="2025-09-09T00:18:48.002233415Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.104-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:18:48.002387 containerd[1473]: time="2025-09-09T00:18:48.002348321Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 9 00:18:48.002470 containerd[1473]: time="2025-09-09T00:18:48.002453228Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 9 00:18:48.003010 containerd[1473]: time="2025-09-09T00:18:48.002989854Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 9 00:18:48.003076 containerd[1473]: time="2025-09-09T00:18:48.003061949Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003210 containerd[1473]: time="2025-09-09T00:18:48.003189238Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003264 containerd[1473]: time="2025-09-09T00:18:48.003251495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003613 containerd[1473]: time="2025-09-09T00:18:48.003590601Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003690 containerd[1473]: time="2025-09-09T00:18:48.003675641Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003750 containerd[1473]: time="2025-09-09T00:18:48.003735633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003796 containerd[1473]: time="2025-09-09T00:18:48.003784314Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.003983 containerd[1473]: time="2025-09-09T00:18:48.003963861Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.004310 containerd[1473]: time="2025-09-09T00:18:48.004290183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 9 00:18:48.004566 containerd[1473]: time="2025-09-09T00:18:48.004545703Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 9 00:18:48.004626 containerd[1473]: time="2025-09-09T00:18:48.004613329Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 9 00:18:48.004823 containerd[1473]: time="2025-09-09T00:18:48.004803586Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 9 00:18:48.004950 containerd[1473]: time="2025-09-09T00:18:48.004933941Z" level=info msg="metadata content store policy set" policy=shared
Sep 9 00:18:48.017745 containerd[1473]: time="2025-09-09T00:18:48.017682213Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 9 00:18:48.017810 containerd[1473]: time="2025-09-09T00:18:48.017781780Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 9 00:18:48.017810 containerd[1473]: time="2025-09-09T00:18:48.017801366Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 9 00:18:48.017847 containerd[1473]: time="2025-09-09T00:18:48.017816996Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 9 00:18:48.017867 containerd[1473]: time="2025-09-09T00:18:48.017852051Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 9 00:18:48.018136 containerd[1473]: time="2025-09-09T00:18:48.018103233Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 9 00:18:48.018528 containerd[1473]: time="2025-09-09T00:18:48.018491591Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 9 00:18:48.018700 containerd[1473]: time="2025-09-09T00:18:48.018671188Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 9 00:18:48.018700 containerd[1473]: time="2025-09-09T00:18:48.018694752Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 9 00:18:48.018740 containerd[1473]: time="2025-09-09T00:18:48.018708378Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 9 00:18:48.018740 containerd[1473]: time="2025-09-09T00:18:48.018723706Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018784 containerd[1473]: time="2025-09-09T00:18:48.018751248Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018784 containerd[1473]: time="2025-09-09T00:18:48.018765214Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018821 containerd[1473]: time="2025-09-09T00:18:48.018783228Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018821 containerd[1473]: time="2025-09-09T00:18:48.018798326Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018821 containerd[1473]: time="2025-09-09T00:18:48.018810950Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018872 containerd[1473]: time="2025-09-09T00:18:48.018822903Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018872 containerd[1473]: time="2025-09-09T00:18:48.018837069Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 9 00:18:48.018872 containerd[1473]: time="2025-09-09T00:18:48.018857187Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018872 containerd[1473]: time="2025-09-09T00:18:48.018870742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018951 containerd[1473]: time="2025-09-09T00:18:48.018883436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018951 containerd[1473]: time="2025-09-09T00:18:48.018895499Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018951 containerd[1473]: time="2025-09-09T00:18:48.018919945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018951 containerd[1473]: time="2025-09-09T00:18:48.018937007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.018951 containerd[1473]: time="2025-09-09T00:18:48.018949941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019041 containerd[1473]: time="2025-09-09T00:18:48.018962635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019041 containerd[1473]: time="2025-09-09T00:18:48.018996558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019041 containerd[1473]: time="2025-09-09T00:18:48.019014642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019041 containerd[1473]: time="2025-09-09T00:18:48.019025653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019041 containerd[1473]: time="2025-09-09T00:18:48.019039088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019131 containerd[1473]: time="2025-09-09T00:18:48.019056350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019131 containerd[1473]: time="2025-09-09T00:18:48.019071549Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 9 00:18:48.019131 containerd[1473]: time="2025-09-09T00:18:48.019093590Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019131 containerd[1473]: time="2025-09-09T00:18:48.019105412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019131 containerd[1473]: time="2025-09-09T00:18:48.019116493Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 9 00:18:48.019233 containerd[1473]: time="2025-09-09T00:18:48.019210490Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 9 00:18:48.019256 containerd[1473]: time="2025-09-09T00:18:48.019239364Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 9 00:18:48.019256 containerd[1473]: time="2025-09-09T00:18:48.019250996Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 9 00:18:48.019300 containerd[1473]: time="2025-09-09T00:18:48.019263559Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 9 00:18:48.019300 containerd[1473]: time="2025-09-09T00:18:48.019273708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019345 containerd[1473]: time="2025-09-09T00:18:48.019298104Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 9 00:18:48.019345 containerd[1473]: time="2025-09-09T00:18:48.019323171Z" level=info msg="NRI interface is disabled by configuration."
Sep 9 00:18:48.019442 containerd[1473]: time="2025-09-09T00:18:48.019345753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 9 00:18:48.019831 containerd[1473]: time="2025-09-09T00:18:48.019760962Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 9 00:18:48.019831 containerd[1473]: time="2025-09-09T00:18:48.019834069Z" level=info msg="Connect containerd service"
Sep 9 00:18:48.020088 containerd[1473]: time="2025-09-09T00:18:48.019888491Z" level=info msg="using legacy CRI server"
Sep 9 00:18:48.020088 containerd[1473]: time="2025-09-09T00:18:48.019896086Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 9 00:18:48.023675 containerd[1473]: time="2025-09-09T00:18:48.023509323Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 9 00:18:48.024902 containerd[1473]: time="2025-09-09T00:18:48.024832815Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 00:18:48.025184 containerd[1473]: time="2025-09-09T00:18:48.025091951Z" level=info msg="Start subscribing containerd event"
Sep 9 00:18:48.025320 containerd[1473]: time="2025-09-09T00:18:48.025278762Z" level=info msg="Start recovering state"
Sep 9 00:18:48.025483 containerd[1473]: time="2025-09-09T00:18:48.025381124Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 9 00:18:48.025566 containerd[1473]: time="2025-09-09T00:18:48.025494176Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 9 00:18:48.028929 containerd[1473]: time="2025-09-09T00:18:48.028481620Z" level=info msg="Start event monitor"
Sep 9 00:18:48.028929 containerd[1473]: time="2025-09-09T00:18:48.028520462Z" level=info msg="Start snapshots syncer"
Sep 9 00:18:48.028929 containerd[1473]: time="2025-09-09T00:18:48.028538015Z" level=info msg="Start cni network conf syncer for default"
Sep 9 00:18:48.028929 containerd[1473]: time="2025-09-09T00:18:48.028552232Z" level=info msg="Start streaming server"
Sep 9 00:18:48.028929 containerd[1473]: time="2025-09-09T00:18:48.028652069Z" level=info msg="containerd successfully booted in 0.128614s"
Sep 9 00:18:48.028785 systemd[1]: Started containerd.service - containerd container runtime.
Sep 9 00:18:48.069350 tar[1463]: linux-amd64/README.md
Sep 9 00:18:48.077942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 00:18:48.101497 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 9 00:18:48.136477 sshd_keygen[1466]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 9 00:18:48.170355 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 9 00:18:48.182751 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 9 00:18:48.185182 systemd[1]: Started sshd@0-10.0.0.15:22-10.0.0.1:39262.service - OpenSSH per-connection server daemon (10.0.0.1:39262).
Sep 9 00:18:48.192444 systemd[1]: issuegen.service: Deactivated successfully.
Sep 9 00:18:48.192750 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 9 00:18:48.197200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 9 00:18:48.215980 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 9 00:18:48.227834 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 9 00:18:48.230826 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 9 00:18:48.232427 systemd[1]: Reached target getty.target - Login Prompts.
Sep 9 00:18:48.251834 sshd[1528]: Accepted publickey for core from 10.0.0.1 port 39262 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:48.254616 sshd[1528]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:48.263997 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 00:18:48.282840 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 00:18:48.286485 systemd-logind[1454]: New session 1 of user core.
Sep 9 00:18:48.300209 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 00:18:48.316923 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 00:18:48.322326 (systemd)[1539]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 00:18:48.445023 systemd[1539]: Queued start job for default target default.target.
Sep 9 00:18:48.461019 systemd[1539]: Created slice app.slice - User Application Slice.
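The "Start cri plugin" dump above is the effective CRI configuration: runc driven through io.containerd.runc.v2 with SystemdCgroup:true, sandbox image registry.k8s.io/pause:3.8, and CNI config expected under /etc/cni/net.d, which is still empty at this point, hence the level=error about CNI just before it. A sketch of how the corresponding config fragment could be inspected on the node (keys follow containerd 1.7's version-2 config layout; exact file contents on this image are an assumption):

    # Hedged sketch: print the merged containerd config and find the runc options.
    containerd config dump | grep -A3 'runtimes.runc.options'
    # Expected fragment, matching SystemdCgroup:true in the dump above:
    #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    #     SystemdCgroup = true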
Sep 9 00:18:48.461050 systemd[1539]: Reached target paths.target - Paths.
Sep 9 00:18:48.461064 systemd[1539]: Reached target timers.target - Timers.
Sep 9 00:18:48.463175 systemd[1539]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 00:18:48.477309 systemd[1539]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 00:18:48.477501 systemd[1539]: Reached target sockets.target - Sockets.
Sep 9 00:18:48.477523 systemd[1539]: Reached target basic.target - Basic System.
Sep 9 00:18:48.477568 systemd[1539]: Reached target default.target - Main User Target.
Sep 9 00:18:48.477608 systemd[1539]: Startup finished in 145ms.
Sep 9 00:18:48.478105 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 00:18:48.480991 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 00:18:48.544821 systemd[1]: Started sshd@1-10.0.0.15:22-10.0.0.1:39264.service - OpenSSH per-connection server daemon (10.0.0.1:39264).
Sep 9 00:18:48.584462 sshd[1550]: Accepted publickey for core from 10.0.0.1 port 39264 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:48.586543 sshd[1550]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:48.592090 systemd-logind[1454]: New session 2 of user core.
Sep 9 00:18:48.602621 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 00:18:48.622604 systemd-networkd[1410]: eth0: Gained IPv6LL
Sep 9 00:18:48.626222 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 9 00:18:48.628260 systemd[1]: Reached target network-online.target - Network is Online.
Sep 9 00:18:48.646071 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 9 00:18:48.650249 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:18:48.653624 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 9 00:18:48.671908 sshd[1550]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:48.676651 systemd[1]: Started sshd@2-10.0.0.15:22-10.0.0.1:39268.service - OpenSSH per-connection server daemon (10.0.0.1:39268).
Sep 9 00:18:48.680344 systemd[1]: sshd@1-10.0.0.15:22-10.0.0.1:39264.service: Deactivated successfully.
Sep 9 00:18:48.687698 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 00:18:48.693833 systemd-logind[1454]: Session 2 logged out. Waiting for processes to exit.
Sep 9 00:18:48.695302 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 9 00:18:48.698591 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 9 00:18:48.698884 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 9 00:18:48.703351 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 9 00:18:48.704797 systemd-logind[1454]: Removed session 2.
Sep 9 00:18:48.718099 sshd[1564]: Accepted publickey for core from 10.0.0.1 port 39268 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:48.720125 sshd[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:48.724823 systemd-logind[1454]: New session 3 of user core.
Sep 9 00:18:48.734548 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 00:18:48.821104 sshd[1564]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:48.825357 systemd[1]: sshd@2-10.0.0.15:22-10.0.0.1:39268.service: Deactivated successfully.
Sep 9 00:18:48.827838 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 00:18:48.828586 systemd-logind[1454]: Session 3 logged out. Waiting for processes to exit.
Sep 9 00:18:48.829518 systemd-logind[1454]: Removed session 3.
Sep 9 00:18:50.645619 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:18:50.647114 (kubelet)[1585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:18:50.647727 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 9 00:18:50.650020 systemd[1]: Startup finished in 1.395s (kernel) + 9.380s (initrd) + 7.041s (userspace) = 17.818s.
Sep 9 00:18:51.316828 kubelet[1585]: E0909 00:18:51.316739 1585 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:18:51.321458 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:18:51.321768 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:18:51.322272 systemd[1]: kubelet.service: Consumed 2.245s CPU time.
Sep 9 00:18:58.834403 systemd[1]: Started sshd@3-10.0.0.15:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438).
Sep 9 00:18:58.869710 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:58.871824 sshd[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:58.876692 systemd-logind[1454]: New session 4 of user core.
Sep 9 00:18:58.898581 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 00:18:58.954618 sshd[1598]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:58.964849 systemd[1]: sshd@3-10.0.0.15:22-10.0.0.1:42438.service: Deactivated successfully.
Sep 9 00:18:58.967198 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 00:18:58.968978 systemd-logind[1454]: Session 4 logged out. Waiting for processes to exit.
Sep 9 00:18:58.992929 systemd[1]: Started sshd@4-10.0.0.15:22-10.0.0.1:42444.service - OpenSSH per-connection server daemon (10.0.0.1:42444).
Sep 9 00:18:58.994172 systemd-logind[1454]: Removed session 4.
Sep 9 00:18:59.023354 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 42444 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:59.025104 sshd[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:59.029189 systemd-logind[1454]: New session 5 of user core.
Sep 9 00:18:59.038522 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 00:18:59.089357 sshd[1605]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:59.107274 systemd[1]: sshd@4-10.0.0.15:22-10.0.0.1:42444.service: Deactivated successfully.
Sep 9 00:18:59.109773 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 00:18:59.112061 systemd-logind[1454]: Session 5 logged out. Waiting for processes to exit.
Sep 9 00:18:59.120791 systemd[1]: Started sshd@5-10.0.0.15:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460).
Sep 9 00:18:59.121964 systemd-logind[1454]: Removed session 5.
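The kubelet failure above is the expected pre-bootstrap state: the unit is enabled, but /var/lib/kubelet/config.yaml is only written by kubeadm during init or join, so every start exits with status 1 and systemd keeps rescheduling it (the restart-counter entries that follow later). A sketch of the resolution path, assuming this node is being provisioned with kubeadm as the later log entries suggest:

    # Hedged sketch: the failure clears once kubeadm writes the config.
    systemctl status kubelet              # shows the crash/restart loop
    sudo kubeadm init                     # writes /var/lib/kubelet/config.yaml
    ls /var/lib/kubelet/config.yaml       # present after init/join completes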
Sep 9 00:18:59.150452 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:59.152150 sshd[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:59.156974 systemd-logind[1454]: New session 6 of user core.
Sep 9 00:18:59.163539 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 00:18:59.219448 sshd[1612]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:59.231275 systemd[1]: sshd@5-10.0.0.15:22-10.0.0.1:42460.service: Deactivated successfully.
Sep 9 00:18:59.233998 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 00:18:59.235824 systemd-logind[1454]: Session 6 logged out. Waiting for processes to exit.
Sep 9 00:18:59.243722 systemd[1]: Started sshd@6-10.0.0.15:22-10.0.0.1:42470.service - OpenSSH per-connection server daemon (10.0.0.1:42470).
Sep 9 00:18:59.244839 systemd-logind[1454]: Removed session 6.
Sep 9 00:18:59.275807 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 42470 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:59.277986 sshd[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:59.283265 systemd-logind[1454]: New session 7 of user core.
Sep 9 00:18:59.292504 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 00:18:59.354584 sudo[1622]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 00:18:59.354973 sudo[1622]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:18:59.370579 sudo[1622]: pam_unix(sudo:session): session closed for user root
Sep 9 00:18:59.372774 sshd[1619]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:59.388026 systemd[1]: sshd@6-10.0.0.15:22-10.0.0.1:42470.service: Deactivated successfully.
Sep 9 00:18:59.389690 systemd[1]: session-7.scope: Deactivated successfully.
Sep 9 00:18:59.391332 systemd-logind[1454]: Session 7 logged out. Waiting for processes to exit.
Sep 9 00:18:59.392805 systemd[1]: Started sshd@7-10.0.0.15:22-10.0.0.1:42480.service - OpenSSH per-connection server daemon (10.0.0.1:42480).
Sep 9 00:18:59.393571 systemd-logind[1454]: Removed session 7.
Sep 9 00:18:59.426402 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 42480 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:59.428016 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:59.432062 systemd-logind[1454]: New session 8 of user core.
Sep 9 00:18:59.446511 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 00:18:59.502350 sudo[1631]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 00:18:59.502735 sudo[1631]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:18:59.507894 sudo[1631]: pam_unix(sudo:session): session closed for user root
Sep 9 00:18:59.515092 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 9 00:18:59.515470 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:18:59.534634 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 9 00:18:59.536577 auditctl[1634]: No rules
Sep 9 00:18:59.537137 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 00:18:59.537429 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 9 00:18:59.540397 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 9 00:18:59.580200 augenrules[1652]: No rules
Sep 9 00:18:59.582267 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 9 00:18:59.583756 sudo[1630]: pam_unix(sudo:session): session closed for user root
Sep 9 00:18:59.585862 sshd[1627]: pam_unix(sshd:session): session closed for user core
Sep 9 00:18:59.598769 systemd[1]: sshd@7-10.0.0.15:22-10.0.0.1:42480.service: Deactivated successfully.
Sep 9 00:18:59.600577 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 00:18:59.602350 systemd-logind[1454]: Session 8 logged out. Waiting for processes to exit.
Sep 9 00:18:59.609663 systemd[1]: Started sshd@8-10.0.0.15:22-10.0.0.1:42490.service - OpenSSH per-connection server daemon (10.0.0.1:42490).
Sep 9 00:18:59.610962 systemd-logind[1454]: Removed session 8.
Sep 9 00:18:59.642237 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 42490 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:18:59.644057 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:18:59.649243 systemd-logind[1454]: New session 9 of user core.
Sep 9 00:18:59.663551 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 00:18:59.719614 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 00:18:59.719976 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 00:19:00.034830 (dockerd)[1681]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 00:19:00.034959 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 00:19:00.553894 dockerd[1681]: time="2025-09-09T00:19:00.553789407Z" level=info msg="Starting up"
Sep 9 00:19:01.180808 dockerd[1681]: time="2025-09-09T00:19:01.180723466Z" level=info msg="Loading containers: start."
Sep 9 00:19:01.317409 kernel: Initializing XFRM netlink socket
Sep 9 00:19:01.351587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 00:19:01.361718 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:01.415510 systemd-networkd[1410]: docker0: Link UP
Sep 9 00:19:01.619205 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:01.623959 (kubelet)[1789]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:19:01.735874 kubelet[1789]: E0909 00:19:01.735734 1789 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:19:01.745168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:19:01.745436 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:19:01.850431 dockerd[1681]: time="2025-09-09T00:19:01.850348551Z" level=info msg="Loading containers: done."
Sep 9 00:19:01.869750 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1787992129-merged.mount: Deactivated successfully.
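install.sh has started docker.service here; the daemon completes initialization in the next entries and announces "API listen on /run/docker.sock". A sketch of probing the engine over that Unix socket once those entries appear (both commands are standard Docker tooling):

    # Hedged sketch: probing the engine on the socket announced below.
    curl --unix-socket /run/docker.sock http://localhost/_ping   # prints OK
    docker version --format '{{.Server.Version}}'                # 26.1.0 per the log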
Sep 9 00:19:02.205626 dockerd[1681]: time="2025-09-09T00:19:02.205415816Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 00:19:02.205796 dockerd[1681]: time="2025-09-09T00:19:02.205638674Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 9 00:19:02.205950 dockerd[1681]: time="2025-09-09T00:19:02.205911235Z" level=info msg="Daemon has completed initialization"
Sep 9 00:19:02.348644 dockerd[1681]: time="2025-09-09T00:19:02.348541559Z" level=info msg="API listen on /run/docker.sock"
Sep 9 00:19:02.348823 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 00:19:03.561155 containerd[1473]: time="2025-09-09T00:19:03.561083862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 9 00:19:04.591805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3379277820.mount: Deactivated successfully.
Sep 9 00:19:06.546682 containerd[1473]: time="2025-09-09T00:19:06.546586181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:06.547470 containerd[1473]: time="2025-09-09T00:19:06.547376222Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=30078664"
Sep 9 00:19:06.550226 containerd[1473]: time="2025-09-09T00:19:06.550176455Z" level=info msg="ImageCreate event name:\"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:06.554222 containerd[1473]: time="2025-09-09T00:19:06.554143336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:06.555497 containerd[1473]: time="2025-09-09T00:19:06.555443845Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"30075464\" in 2.994280324s"
Sep 9 00:19:06.555658 containerd[1473]: time="2025-09-09T00:19:06.555501163Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:1f41885d0a91155d5a5e670b2862eed338c7f12b0e8a5bbc88b1ab4a2d505ae8\""
Sep 9 00:19:06.557015 containerd[1473]: time="2025-09-09T00:19:06.556956142Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 9 00:19:08.244692 containerd[1473]: time="2025-09-09T00:19:08.244615659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:08.245532 containerd[1473]: time="2025-09-09T00:19:08.245467126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=26018066"
Sep 9 00:19:08.247029 containerd[1473]: time="2025-09-09T00:19:08.246989902Z" level=info msg="ImageCreate event name:\"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:08.250331 containerd[1473]: time="2025-09-09T00:19:08.250270165Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:08.252515 containerd[1473]: time="2025-09-09T00:19:08.252453841Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"27646961\" in 1.695436544s"
Sep 9 00:19:08.252606 containerd[1473]: time="2025-09-09T00:19:08.252515216Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:358ab71c1a1ea4846ad0b3dff0d9db6b124236b64bc8a6b79dc874f65dc0d492\""
Sep 9 00:19:08.253527 containerd[1473]: time="2025-09-09T00:19:08.253497338Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 9 00:19:10.721915 containerd[1473]: time="2025-09-09T00:19:10.721802894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:10.725659 containerd[1473]: time="2025-09-09T00:19:10.725533792Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=20153911"
Sep 9 00:19:10.734555 containerd[1473]: time="2025-09-09T00:19:10.734500080Z" level=info msg="ImageCreate event name:\"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:10.745948 containerd[1473]: time="2025-09-09T00:19:10.745886237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:10.747338 containerd[1473]: time="2025-09-09T00:19:10.747257700Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"21782824\" in 2.493725196s"
Sep 9 00:19:10.747338 containerd[1473]: time="2025-09-09T00:19:10.747311330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:ab4ad8a84c3c69c18494ef32fa087b32f7c44d71e6acba463d2c7dda798c3d66\""
Sep 9 00:19:10.747910 containerd[1473]: time="2025-09-09T00:19:10.747870439Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 9 00:19:11.995852 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 9 00:19:12.013520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:12.261551 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
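These PullImage / ImageCreate pairs are containerd's CRI plugin fetching the control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler so far, with kube-proxy next), presumably pre-pulled by the install flow ahead of kubeadm. The same pull can be driven by hand through the CRI socket or containerd's k8s.io namespace:

    # Hedged sketch: reproducing one of the logged pulls manually.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      pull registry.k8s.io/kube-scheduler:v1.33.4
    ctr --namespace k8s.io images ls | grep kube-scheduler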
Sep 9 00:19:12.262377 (kubelet)[1920]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:19:12.499099 kubelet[1920]: E0909 00:19:12.498936 1920 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:19:12.504059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:19:12.504280 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 00:19:12.753582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3607301689.mount: Deactivated successfully.
Sep 9 00:19:14.511312 containerd[1473]: time="2025-09-09T00:19:14.511225057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:14.539821 containerd[1473]: time="2025-09-09T00:19:14.539722800Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=31899626"
Sep 9 00:19:14.570999 containerd[1473]: time="2025-09-09T00:19:14.570928092Z" level=info msg="ImageCreate event name:\"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:14.630388 containerd[1473]: time="2025-09-09T00:19:14.630296220Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:14.631285 containerd[1473]: time="2025-09-09T00:19:14.631245531Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"31898645\" in 3.883332943s"
Sep 9 00:19:14.631343 containerd[1473]: time="2025-09-09T00:19:14.631288311Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:1b2ea5e018dbbbd2efb8e5c540a6d3c463d77f250d3904429402ee057f09c64e\""
Sep 9 00:19:14.632073 containerd[1473]: time="2025-09-09T00:19:14.632029251Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 9 00:19:17.688770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1004327094.mount: Deactivated successfully.
Sep 9 00:19:20.323906 containerd[1473]: time="2025-09-09T00:19:20.323806712Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:20.324671 containerd[1473]: time="2025-09-09T00:19:20.324573007Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238"
Sep 9 00:19:20.325927 containerd[1473]: time="2025-09-09T00:19:20.325888250Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:20.329973 containerd[1473]: time="2025-09-09T00:19:20.329924631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:20.333519 containerd[1473]: time="2025-09-09T00:19:20.331575710Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 5.699497186s"
Sep 9 00:19:20.333519 containerd[1473]: time="2025-09-09T00:19:20.331687436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Sep 9 00:19:20.334045 containerd[1473]: time="2025-09-09T00:19:20.334015709Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 00:19:21.073455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount761602138.mount: Deactivated successfully.
Sep 9 00:19:21.080986 containerd[1473]: time="2025-09-09T00:19:21.080938322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:21.082020 containerd[1473]: time="2025-09-09T00:19:21.081957892Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 9 00:19:21.083946 containerd[1473]: time="2025-09-09T00:19:21.083908782Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:21.086270 containerd[1473]: time="2025-09-09T00:19:21.086241627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:21.087081 containerd[1473]: time="2025-09-09T00:19:21.087048208Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 752.998693ms"
Sep 9 00:19:21.087081 containerd[1473]: time="2025-09-09T00:19:21.087080750Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 9 00:19:21.087737 containerd[1473]: time="2025-09-09T00:19:21.087695812Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 9 00:19:21.647994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3723032230.mount: Deactivated successfully.
Sep 9 00:19:22.754798 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 9 00:19:22.767564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:22.964100 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:22.970060 (kubelet)[2010]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 00:19:23.058028 kubelet[2010]: E0909 00:19:23.057755 2010 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 00:19:23.063517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 00:19:23.063761 systemd[1]: kubelet.service: Failed with result 'exit-code'.
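The restart counter reaching 3 shows systemd's Restart= policy at work; the roughly ten-second spacing between attempts in this log (00:19:01, 00:19:11, 00:19:22) is consistent with the RestartSec=10 that kubeadm-style kubelet units conventionally carry. A sketch of inspecting that policy (expected values are an assumption about this image's unit):

    # Hedged sketch: the unit's restart policy drives the loop seen here.
    systemctl cat kubelet.service | sed -n '/^\[Service\]/,$p'
    # A typical kubeadm-style unit carries:
    #   Restart=always
    #   RestartSec=10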
Sep 9 00:19:25.356007 containerd[1473]: time="2025-09-09T00:19:25.355889458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:25.366158 containerd[1473]: time="2025-09-09T00:19:25.366092193Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58377871"
Sep 9 00:19:25.377271 containerd[1473]: time="2025-09-09T00:19:25.377212593Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:25.385562 containerd[1473]: time="2025-09-09T00:19:25.385488474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 00:19:25.388385 containerd[1473]: time="2025-09-09T00:19:25.387898721Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 4.300158023s"
Sep 9 00:19:25.388385 containerd[1473]: time="2025-09-09T00:19:25.387969948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\""
Sep 9 00:19:28.832088 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:28.845741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:28.881310 systemd[1]: Reloading requested from client PID 2092 ('systemctl') (unit session-9.scope)...
Sep 9 00:19:28.881351 systemd[1]: Reloading...
Sep 9 00:19:28.996875 zram_generator::config[2132]: No configuration found.
Sep 9 00:19:29.394444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 9 00:19:29.474724 systemd[1]: Reloading finished in 592 ms.
Sep 9 00:19:29.527788 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 00:19:29.527913 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 00:19:29.528258 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:29.544850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 00:19:29.739460 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 00:19:29.745854 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 00:19:29.797455 kubelet[2180]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:29.797455 kubelet[2180]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 9 00:19:29.797455 kubelet[2180]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 00:19:29.799423 kubelet[2180]: I0909 00:19:29.799338 2180 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 00:19:31.725033 kubelet[2180]: I0909 00:19:31.724962 2180 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 9 00:19:31.725033 kubelet[2180]: I0909 00:19:31.725013 2180 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 00:19:31.725668 kubelet[2180]: I0909 00:19:31.725637 2180 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 9 00:19:31.782417 kubelet[2180]: I0909 00:19:31.782350 2180 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 00:19:31.803886 kubelet[2180]: E0909 00:19:31.803834 2180 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 9 00:19:31.841738 kubelet[2180]: E0909 00:19:31.841648 2180 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 9 00:19:31.841738 kubelet[2180]: I0909 00:19:31.841705 2180 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 9 00:19:31.848949 kubelet[2180]: I0909 00:19:31.848885 2180 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 00:19:31.849417 kubelet[2180]: I0909 00:19:31.849332 2180 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 00:19:31.849645 kubelet[2180]: I0909 00:19:31.849407 2180 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 00:19:31.849850 kubelet[2180]: I0909 00:19:31.849655 2180 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 00:19:31.849850 kubelet[2180]: I0909 00:19:31.849669 2180 container_manager_linux.go:303] "Creating device plugin manager"
Sep 9 00:19:31.849959 kubelet[2180]: I0909 00:19:31.849931 2180 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 00:19:31.862120 kubelet[2180]: I0909 00:19:31.862061 2180 kubelet.go:480] "Attempting to sync node with API server"
Sep 9 00:19:31.862218 kubelet[2180]: I0909 00:19:31.862190 2180 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 00:19:31.863324 kubelet[2180]: I0909 00:19:31.863296 2180 kubelet.go:386] "Adding apiserver pod source"
Sep 9 00:19:31.863350 kubelet[2180]: I0909 00:19:31.863328 2180 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 00:19:31.865543 kubelet[2180]: E0909 00:19:31.865474 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 9 00:19:31.865543 kubelet[2180]: E0909 00:19:31.865512 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 9 00:19:31.923494 kubelet[2180]:
I0909 00:19:31.923450 2180 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:19:31.924084 kubelet[2180]: I0909 00:19:31.924056 2180 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:19:31.925250 kubelet[2180]: W0909 00:19:31.925230 2180 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 9 00:19:31.975522 kubelet[2180]: I0909 00:19:31.975272 2180 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:19:31.975522 kubelet[2180]: I0909 00:19:31.975348 2180 server.go:1289] "Started kubelet" Sep 9 00:19:31.977132 kubelet[2180]: I0909 00:19:31.976177 2180 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:19:31.977387 kubelet[2180]: I0909 00:19:31.977341 2180 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:19:31.977521 kubelet[2180]: I0909 00:19:31.977340 2180 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:19:31.978474 kubelet[2180]: I0909 00:19:31.977418 2180 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:19:31.978985 kubelet[2180]: I0909 00:19:31.978958 2180 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:19:31.980264 kubelet[2180]: E0909 00:19:31.979949 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:31.980264 kubelet[2180]: I0909 00:19:31.979986 2180 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:19:31.980264 kubelet[2180]: I0909 00:19:31.980145 2180 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:19:31.980264 kubelet[2180]: I0909 00:19:31.980233 2180 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:19:31.980718 kubelet[2180]: E0909 00:19:31.980671 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:19:31.981113 kubelet[2180]: E0909 00:19:31.981080 2180 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:19:31.981242 kubelet[2180]: I0909 00:19:31.981225 2180 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:19:31.981323 kubelet[2180]: I0909 00:19:31.981307 2180 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:19:31.982825 kubelet[2180]: I0909 00:19:31.982597 2180 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:19:31.982825 kubelet[2180]: I0909 00:19:31.982723 2180 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:19:31.984531 kubelet[2180]: E0909 00:19:31.981405 2180 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.15:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.15:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863753a17f7fa45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:19:31.975305797 +0000 UTC m=+2.221979448,LastTimestamp:2025-09-09 00:19:31.975305797 +0000 UTC m=+2.221979448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:19:31.984846 kubelet[2180]: E0909 00:19:31.982600 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="200ms" Sep 9 00:19:31.988006 kubelet[2180]: I0909 00:19:31.987960 2180 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:19:32.000217 kubelet[2180]: I0909 00:19:32.000173 2180 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:19:32.000217 kubelet[2180]: I0909 00:19:32.000204 2180 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:19:32.000217 kubelet[2180]: I0909 00:19:32.000224 2180 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:32.069093 kubelet[2180]: I0909 00:19:32.068982 2180 policy_none.go:49] "None policy: Start" Sep 9 00:19:32.069093 kubelet[2180]: I0909 00:19:32.069035 2180 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:19:32.069093 kubelet[2180]: I0909 00:19:32.069060 2180 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:19:32.071180 kubelet[2180]: I0909 00:19:32.071140 2180 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:19:32.071224 kubelet[2180]: I0909 00:19:32.071187 2180 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:19:32.071259 kubelet[2180]: I0909 00:19:32.071226 2180 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
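[Annotation] The lease-controller failure above retries with interval="200ms", and the same message recurs below at 400ms, 800ms, 1.6s, and finally 3.2s — a doubling backoff while the API server at 10.0.0.15:6443 stays unreachable. A minimal sketch of that schedule, assuming a simple doubling policy (the eventual cap is an assumption; the log only shows values up to 3.2s):

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// ensureLease stands in for the call that POSTs the node Lease; here it
// always fails, as it does above while the API server is unreachable.
func ensureLease() error {
	return errors.New("dial tcp 10.0.0.15:6443: connect: connection refused")
}

func main() {
	interval := 200 * time.Millisecond // first retry interval seen in the log
	maxInterval := 7 * time.Second     // assumed cap, not taken from the log
	for i := 0; i < 5; i++ {
		if err := ensureLease(); err != nil {
			// Print the schedule instead of sleeping, to keep the sketch fast.
			fmt.Printf("failed to ensure lease, will retry, interval=%v (%v)\n", interval, err)
			interval *= 2
			if interval > maxInterval {
				interval = maxInterval
			}
		}
	}
	// Prints: 200ms, 400ms, 800ms, 1.6s, 3.2s — matching the intervals logged.
}
```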
Sep 9 00:19:32.071259 kubelet[2180]: I0909 00:19:32.071239 2180 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:19:32.071317 kubelet[2180]: E0909 00:19:32.071286 2180 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:19:32.073037 kubelet[2180]: E0909 00:19:32.073012 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:32.081136 kubelet[2180]: E0909 00:19:32.081092 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:32.094280 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 00:19:32.108530 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 00:19:32.114249 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 00:19:32.126083 kubelet[2180]: E0909 00:19:32.126021 2180 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:19:32.126533 kubelet[2180]: I0909 00:19:32.126510 2180 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:19:32.126633 kubelet[2180]: I0909 00:19:32.126534 2180 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:19:32.126952 kubelet[2180]: I0909 00:19:32.126928 2180 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:19:32.127950 kubelet[2180]: E0909 00:19:32.127896 2180 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 9 00:19:32.127950 kubelet[2180]: E0909 00:19:32.127949 2180 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 00:19:32.182203 kubelet[2180]: I0909 00:19:32.182119 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:32.182203 kubelet[2180]: I0909 00:19:32.182176 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:32.182203 kubelet[2180]: I0909 00:19:32.182205 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:32.185898 kubelet[2180]: E0909 00:19:32.185838 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="400ms" Sep 9 00:19:32.199354 systemd[1]: Created slice kubepods-burstable-podb89365fedd762b6e9a347057cb98e7bb.slice - libcontainer container kubepods-burstable-podb89365fedd762b6e9a347057cb98e7bb.slice. Sep 9 00:19:32.221685 kubelet[2180]: E0909 00:19:32.221621 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:32.226468 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 9 00:19:32.228734 kubelet[2180]: I0909 00:19:32.228321 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:32.228932 kubelet[2180]: E0909 00:19:32.228889 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 00:19:32.229938 kubelet[2180]: E0909 00:19:32.229580 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:32.243549 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
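[Annotation] The "Created slice" entries above pair each static pod with a systemd slice named from its QoS class and UID (e.g. UID b89365fedd762b6e9a347057cb98e7bb under the burstable class). A sketch of that naming, assuming the usual systemd-driver convention of flattening dashes in the UID to underscores (the UIDs in this log happen to contain none, so only the kubepods-<qos>-pod<uid>.slice shape is actually confirmed here):

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming visible in the systemd entries above:
// kubepods-<qos>-pod<uid>.slice. Dashes in the UID are replaced because
// systemd uses "-" to express slice hierarchy (assumed convention).
func sliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(sliceName("burstable", "b89365fedd762b6e9a347057cb98e7bb"))
	// -> kubepods-burstable-podb89365fedd762b6e9a347057cb98e7bb.slice
}
```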
Sep 9 00:19:32.245730 kubelet[2180]: E0909 00:19:32.245673 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:32.282596 kubelet[2180]: I0909 00:19:32.282527 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:32.282596 kubelet[2180]: I0909 00:19:32.282585 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:32.282835 kubelet[2180]: I0909 00:19:32.282615 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:32.282835 kubelet[2180]: I0909 00:19:32.282699 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:32.282835 kubelet[2180]: I0909 00:19:32.282750 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:32.282835 kubelet[2180]: I0909 00:19:32.282791 2180 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:32.430983 kubelet[2180]: I0909 00:19:32.430932 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:32.431333 kubelet[2180]: E0909 00:19:32.431277 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 00:19:32.522637 kubelet[2180]: E0909 00:19:32.522419 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:32.523616 containerd[1473]: time="2025-09-09T00:19:32.523509185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b89365fedd762b6e9a347057cb98e7bb,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:32.531206 kubelet[2180]: E0909 00:19:32.531156 
2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:32.531961 containerd[1473]: time="2025-09-09T00:19:32.531905015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:32.547469 kubelet[2180]: E0909 00:19:32.547393 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:32.548248 containerd[1473]: time="2025-09-09T00:19:32.548166520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:32.586738 kubelet[2180]: E0909 00:19:32.586654 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="800ms" Sep 9 00:19:32.754003 kubelet[2180]: E0909 00:19:32.753919 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:19:32.832699 update_engine[1458]: I20250909 00:19:32.832317 1458 update_attempter.cc:509] Updating boot flags... Sep 9 00:19:32.834505 kubelet[2180]: I0909 00:19:32.833640 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:32.834505 kubelet[2180]: E0909 00:19:32.834119 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 00:19:32.870401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2224) Sep 9 00:19:32.889976 kubelet[2180]: E0909 00:19:32.889914 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:32.931415 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2226) Sep 9 00:19:32.983483 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2226) Sep 9 00:19:33.030474 kubelet[2180]: E0909 00:19:33.030412 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:19:33.387443 kubelet[2180]: E0909 00:19:33.387356 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: 
connect: connection refused" interval="1.6s" Sep 9 00:19:33.525263 kubelet[2180]: E0909 00:19:33.525182 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 9 00:19:33.635777 kubelet[2180]: I0909 00:19:33.635728 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:33.636152 kubelet[2180]: E0909 00:19:33.636119 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 00:19:33.734947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1469120701.mount: Deactivated successfully. Sep 9 00:19:34.003190 containerd[1473]: time="2025-09-09T00:19:34.002999227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:34.003817 kubelet[2180]: E0909 00:19:34.003145 2180 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 9 00:19:34.010097 containerd[1473]: time="2025-09-09T00:19:34.010012153Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 9 00:19:34.021012 containerd[1473]: time="2025-09-09T00:19:34.020937887Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:34.027198 containerd[1473]: time="2025-09-09T00:19:34.027137009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:34.029008 containerd[1473]: time="2025-09-09T00:19:34.028943284Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:19:34.040012 containerd[1473]: time="2025-09-09T00:19:34.039960882Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:34.042867 containerd[1473]: time="2025-09-09T00:19:34.042766180Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 9 00:19:34.045603 containerd[1473]: time="2025-09-09T00:19:34.045544988Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 9 00:19:34.050277 containerd[1473]: time="2025-09-09T00:19:34.050212708Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.501912323s" Sep 9 00:19:34.051076 containerd[1473]: time="2025-09-09T00:19:34.050988378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.518985287s" Sep 9 00:19:34.051915 containerd[1473]: time="2025-09-09T00:19:34.051865861Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.528234344s" Sep 9 00:19:34.392888 containerd[1473]: time="2025-09-09T00:19:34.390222979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:19:34.392888 containerd[1473]: time="2025-09-09T00:19:34.392518701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:19:34.392888 containerd[1473]: time="2025-09-09T00:19:34.392536515Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.392888 containerd[1473]: time="2025-09-09T00:19:34.392682753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.396022 containerd[1473]: time="2025-09-09T00:19:34.395487960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:19:34.396022 containerd[1473]: time="2025-09-09T00:19:34.395552193Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:19:34.396022 containerd[1473]: time="2025-09-09T00:19:34.395591957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.396022 containerd[1473]: time="2025-09-09T00:19:34.395756841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.402418 containerd[1473]: time="2025-09-09T00:19:34.401895479Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:19:34.402418 containerd[1473]: time="2025-09-09T00:19:34.402135753Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:19:34.402418 containerd[1473]: time="2025-09-09T00:19:34.402154279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.402418 containerd[1473]: time="2025-09-09T00:19:34.402253177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:34.425688 systemd[1]: Started cri-containerd-b6f4876c05a43193324a1298642e0ef2d45c37a957e6487cf2e2c06d4b295da3.scope - libcontainer container b6f4876c05a43193324a1298642e0ef2d45c37a957e6487cf2e2c06d4b295da3. Sep 9 00:19:34.431217 systemd[1]: Started cri-containerd-79b90fe61354fafd904fec9f35cda99274e988258ca631c02d2c5c90ce477dbc.scope - libcontainer container 79b90fe61354fafd904fec9f35cda99274e988258ca631c02d2c5c90ce477dbc. Sep 9 00:19:34.439044 systemd[1]: Started cri-containerd-695363a5d1f07e760d5c212f956b5e16426fddda5a70d99e2ee18c1c7dcc88d3.scope - libcontainer container 695363a5d1f07e760d5c212f956b5e16426fddda5a70d99e2ee18c1c7dcc88d3. Sep 9 00:19:34.514585 containerd[1473]: time="2025-09-09T00:19:34.513206729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"79b90fe61354fafd904fec9f35cda99274e988258ca631c02d2c5c90ce477dbc\"" Sep 9 00:19:34.514738 kubelet[2180]: E0909 00:19:34.514652 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:34.518108 containerd[1473]: time="2025-09-09T00:19:34.518067243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b89365fedd762b6e9a347057cb98e7bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6f4876c05a43193324a1298642e0ef2d45c37a957e6487cf2e2c06d4b295da3\"" Sep 9 00:19:34.518771 kubelet[2180]: E0909 00:19:34.518741 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:34.530854 containerd[1473]: time="2025-09-09T00:19:34.530743596Z" level=info msg="CreateContainer within sandbox \"79b90fe61354fafd904fec9f35cda99274e988258ca631c02d2c5c90ce477dbc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 00:19:34.533654 containerd[1473]: time="2025-09-09T00:19:34.533617464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"695363a5d1f07e760d5c212f956b5e16426fddda5a70d99e2ee18c1c7dcc88d3\"" Sep 9 00:19:34.534675 kubelet[2180]: E0909 00:19:34.534421 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:34.550021 containerd[1473]: time="2025-09-09T00:19:34.549982861Z" level=info msg="CreateContainer within sandbox \"b6f4876c05a43193324a1298642e0ef2d45c37a957e6487cf2e2c06d4b295da3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 00:19:34.591990 containerd[1473]: time="2025-09-09T00:19:34.591957328Z" level=info msg="CreateContainer within sandbox \"695363a5d1f07e760d5c212f956b5e16426fddda5a70d99e2ee18c1c7dcc88d3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 00:19:34.693675 kubelet[2180]: E0909 00:19:34.693630 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.15:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 9 00:19:34.769626 kubelet[2180]: E0909 00:19:34.769578 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 9 00:19:34.883276 containerd[1473]: time="2025-09-09T00:19:34.883206096Z" level=info msg="CreateContainer within sandbox \"79b90fe61354fafd904fec9f35cda99274e988258ca631c02d2c5c90ce477dbc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4bcb49550bbf02e2af40cf22113e5618aa9bc7513dc8ab29f1652bbf64516885\"" Sep 9 00:19:34.885421 containerd[1473]: time="2025-09-09T00:19:34.884022544Z" level=info msg="StartContainer for \"4bcb49550bbf02e2af40cf22113e5618aa9bc7513dc8ab29f1652bbf64516885\"" Sep 9 00:19:34.919639 systemd[1]: Started cri-containerd-4bcb49550bbf02e2af40cf22113e5618aa9bc7513dc8ab29f1652bbf64516885.scope - libcontainer container 4bcb49550bbf02e2af40cf22113e5618aa9bc7513dc8ab29f1652bbf64516885. Sep 9 00:19:34.961032 kubelet[2180]: E0909 00:19:34.960882 2180 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.15:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 9 00:19:34.987891 kubelet[2180]: E0909 00:19:34.987836 2180 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.15:6443: connect: connection refused" interval="3.2s" Sep 9 00:19:35.117960 containerd[1473]: time="2025-09-09T00:19:35.117895166Z" level=info msg="StartContainer for \"4bcb49550bbf02e2af40cf22113e5618aa9bc7513dc8ab29f1652bbf64516885\" returns successfully" Sep 9 00:19:35.118642 containerd[1473]: time="2025-09-09T00:19:35.118580714Z" level=info msg="CreateContainer within sandbox \"b6f4876c05a43193324a1298642e0ef2d45c37a957e6487cf2e2c06d4b295da3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c52bdf4f6cbe6aad5b2bdc703819c5dea5f2b1d024bd8799f37973c5602f0d7c\"" Sep 9 00:19:35.120875 containerd[1473]: time="2025-09-09T00:19:35.119598913Z" level=info msg="StartContainer for \"c52bdf4f6cbe6aad5b2bdc703819c5dea5f2b1d024bd8799f37973c5602f0d7c\"" Sep 9 00:19:35.122736 kubelet[2180]: E0909 00:19:35.122694 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:35.123072 kubelet[2180]: E0909 00:19:35.122832 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:35.146521 systemd[1]: Started cri-containerd-c52bdf4f6cbe6aad5b2bdc703819c5dea5f2b1d024bd8799f37973c5602f0d7c.scope - libcontainer container c52bdf4f6cbe6aad5b2bdc703819c5dea5f2b1d024bd8799f37973c5602f0d7c. 
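[Annotation] The recurring dns.go warnings above and below report that the host's resolver configuration listed more nameservers than can be applied, and that only 1.1.1.1, 1.0.0.1, and 8.8.8.8 were kept. A minimal sketch of that truncation, assuming the conventional resolver limit of three nameservers (the fourth entry below is a placeholder; the log does not say which server was dropped):

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// Placeholder resolv.conf with one nameserver too many; 192.0.2.1 is
	// an illustrative TEST-NET address, not taken from the log.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 192.0.2.1`

	const maxNameservers = 3 // conventional resolver limit
	var kept []string
	sc := bufio.NewScanner(strings.NewReader(resolvConf))
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) == 2 && fields[0] == "nameserver" && len(kept) < maxNameservers {
			kept = append(kept, fields[1])
		}
	}
	fmt.Println("applied nameserver line:", strings.Join(kept, " "))
	// -> applied nameserver line: 1.1.1.1 1.0.0.1 8.8.8.8
}
```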
Sep 9 00:19:35.238774 kubelet[2180]: I0909 00:19:35.237872 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:35.238774 kubelet[2180]: E0909 00:19:35.238301 2180 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.15:6443/api/v1/nodes\": dial tcp 10.0.0.15:6443: connect: connection refused" node="localhost" Sep 9 00:19:35.350727 containerd[1473]: time="2025-09-09T00:19:35.350578697Z" level=info msg="StartContainer for \"c52bdf4f6cbe6aad5b2bdc703819c5dea5f2b1d024bd8799f37973c5602f0d7c\" returns successfully" Sep 9 00:19:35.350727 containerd[1473]: time="2025-09-09T00:19:35.350590270Z" level=info msg="CreateContainer within sandbox \"695363a5d1f07e760d5c212f956b5e16426fddda5a70d99e2ee18c1c7dcc88d3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc62930a97858cdf1d8cf9c2b2342c7e102394e7e32b33ed5efd5668b6381649\"" Sep 9 00:19:35.352969 containerd[1473]: time="2025-09-09T00:19:35.352927498Z" level=info msg="StartContainer for \"dc62930a97858cdf1d8cf9c2b2342c7e102394e7e32b33ed5efd5668b6381649\"" Sep 9 00:19:35.397641 systemd[1]: Started cri-containerd-dc62930a97858cdf1d8cf9c2b2342c7e102394e7e32b33ed5efd5668b6381649.scope - libcontainer container dc62930a97858cdf1d8cf9c2b2342c7e102394e7e32b33ed5efd5668b6381649. Sep 9 00:19:35.567548 containerd[1473]: time="2025-09-09T00:19:35.566856868Z" level=info msg="StartContainer for \"dc62930a97858cdf1d8cf9c2b2342c7e102394e7e32b33ed5efd5668b6381649\" returns successfully" Sep 9 00:19:36.137869 kubelet[2180]: E0909 00:19:36.137413 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:36.137869 kubelet[2180]: E0909 00:19:36.137648 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:36.142583 kubelet[2180]: E0909 00:19:36.142093 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:36.142583 kubelet[2180]: E0909 00:19:36.142191 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:36.142583 kubelet[2180]: E0909 00:19:36.142409 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:36.142583 kubelet[2180]: E0909 00:19:36.142491 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.143572 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.143718 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.143725 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.143894 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.143905 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:37.143625 kubelet[2180]: E0909 00:19:37.144031 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:37.738189 kubelet[2180]: E0909 00:19:37.738022 2180 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.1863753a17f7fa45 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 00:19:31.975305797 +0000 UTC m=+2.221979448,LastTimestamp:2025-09-09 00:19:31.975305797 +0000 UTC m=+2.221979448,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 9 00:19:38.145207 kubelet[2180]: E0909 00:19:38.145055 2180 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 9 00:19:38.145762 kubelet[2180]: E0909 00:19:38.145236 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:38.230595 kubelet[2180]: E0909 00:19:38.230535 2180 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 00:19:38.312563 kubelet[2180]: E0909 00:19:38.312516 2180 csi_plugin.go:397] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "localhost" not found Sep 9 00:19:38.441239 kubelet[2180]: I0909 00:19:38.441062 2180 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:38.448206 kubelet[2180]: I0909 00:19:38.448139 2180 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:19:38.448206 kubelet[2180]: E0909 00:19:38.448186 2180 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 00:19:38.464464 kubelet[2180]: E0909 00:19:38.464405 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:38.565677 kubelet[2180]: E0909 00:19:38.565582 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:38.666665 kubelet[2180]: E0909 00:19:38.666579 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:38.768215 kubelet[2180]: E0909 00:19:38.768132 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:38.868725 
kubelet[2180]: E0909 00:19:38.868658 2180 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 00:19:38.983764 kubelet[2180]: I0909 00:19:38.983242 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:38.999153 kubelet[2180]: I0909 00:19:38.998738 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:39.004044 kubelet[2180]: I0909 00:19:39.003981 2180 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:39.868411 kubelet[2180]: I0909 00:19:39.868288 2180 apiserver.go:52] "Watching apiserver" Sep 9 00:19:39.872006 kubelet[2180]: E0909 00:19:39.871959 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:39.873276 kubelet[2180]: E0909 00:19:39.873122 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:39.875601 kubelet[2180]: E0909 00:19:39.873315 2180 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:39.880591 kubelet[2180]: I0909 00:19:39.880534 2180 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:19:40.716164 systemd[1]: Reloading requested from client PID 2478 ('systemctl') (unit session-9.scope)... Sep 9 00:19:40.716189 systemd[1]: Reloading... Sep 9 00:19:40.846426 zram_generator::config[2520]: No configuration found. Sep 9 00:19:40.938557 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 9 00:19:41.044018 systemd[1]: Reloading finished in 327 ms. Sep 9 00:19:41.088109 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:41.115883 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 00:19:41.116232 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:41.116288 systemd[1]: kubelet.service: Consumed 1.647s CPU time, 134.5M memory peak, 0B memory swap peak. Sep 9 00:19:41.123935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 00:19:41.301581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 00:19:41.307674 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 00:19:41.347496 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:41.347496 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 9 00:19:41.347496 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 00:19:41.348025 kubelet[2562]: I0909 00:19:41.347531 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 00:19:41.354754 kubelet[2562]: I0909 00:19:41.354696 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 9 00:19:41.354754 kubelet[2562]: I0909 00:19:41.354729 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 00:19:41.355007 kubelet[2562]: I0909 00:19:41.354980 2562 server.go:956] "Client rotation is on, will bootstrap in background" Sep 9 00:19:41.356375 kubelet[2562]: I0909 00:19:41.356339 2562 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 9 00:19:41.360788 kubelet[2562]: I0909 00:19:41.360749 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 00:19:41.368460 kubelet[2562]: E0909 00:19:41.368308 2562 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 9 00:19:41.368460 kubelet[2562]: I0909 00:19:41.368453 2562 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 9 00:19:41.376690 kubelet[2562]: I0909 00:19:41.375881 2562 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 9 00:19:41.376690 kubelet[2562]: I0909 00:19:41.376167 2562 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 00:19:41.376690 kubelet[2562]: I0909 00:19:41.376207 2562 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 9 00:19:41.376690 
kubelet[2562]: I0909 00:19:41.376420 2562 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 00:19:41.377028 kubelet[2562]: I0909 00:19:41.376431 2562 container_manager_linux.go:303] "Creating device plugin manager" Sep 9 00:19:41.378044 kubelet[2562]: I0909 00:19:41.377998 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:41.378302 kubelet[2562]: I0909 00:19:41.378286 2562 kubelet.go:480] "Attempting to sync node with API server" Sep 9 00:19:41.381427 kubelet[2562]: I0909 00:19:41.378306 2562 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 00:19:41.381427 kubelet[2562]: I0909 00:19:41.378341 2562 kubelet.go:386] "Adding apiserver pod source" Sep 9 00:19:41.381427 kubelet[2562]: I0909 00:19:41.378381 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 00:19:41.381427 kubelet[2562]: I0909 00:19:41.379559 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 9 00:19:41.381427 kubelet[2562]: I0909 00:19:41.380164 2562 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 9 00:19:41.393094 kubelet[2562]: I0909 00:19:41.393056 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 9 00:19:41.393215 kubelet[2562]: I0909 00:19:41.393156 2562 server.go:1289] "Started kubelet" Sep 9 00:19:41.394937 kubelet[2562]: I0909 00:19:41.394499 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 00:19:41.395243 kubelet[2562]: I0909 00:19:41.395210 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 00:19:41.396213 kubelet[2562]: I0909 00:19:41.396187 2562 server.go:317] "Adding debug handlers to kubelet server" Sep 9 00:19:41.398963 kubelet[2562]: I0909 00:19:41.398933 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 00:19:41.401397 kubelet[2562]: I0909 00:19:41.399919 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 00:19:41.401717 kubelet[2562]: I0909 00:19:41.401689 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 00:19:41.407292 kubelet[2562]: E0909 00:19:41.407246 2562 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 00:19:41.409445 kubelet[2562]: I0909 00:19:41.409422 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 9 00:19:41.409711 kubelet[2562]: I0909 00:19:41.409693 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 9 00:19:41.409998 kubelet[2562]: I0909 00:19:41.409981 2562 reconciler.go:26] "Reconciler: start to sync state" Sep 9 00:19:41.412248 kubelet[2562]: I0909 00:19:41.412218 2562 factory.go:223] Registration of the containerd container factory successfully Sep 9 00:19:41.412378 kubelet[2562]: I0909 00:19:41.412347 2562 factory.go:223] Registration of the systemd container factory successfully Sep 9 00:19:41.412672 kubelet[2562]: I0909 00:19:41.412608 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 00:19:41.423668 kubelet[2562]: I0909 00:19:41.423629 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 9 00:19:41.425839 kubelet[2562]: I0909 00:19:41.425805 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 9 00:19:41.425931 kubelet[2562]: I0909 00:19:41.425852 2562 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 9 00:19:41.426057 kubelet[2562]: I0909 00:19:41.425884 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 9 00:19:41.426057 kubelet[2562]: I0909 00:19:41.426053 2562 kubelet.go:2436] "Starting kubelet main sync loop" Sep 9 00:19:41.426124 kubelet[2562]: E0909 00:19:41.426107 2562 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 00:19:41.461616 kubelet[2562]: I0909 00:19:41.461578 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 9 00:19:41.461838 kubelet[2562]: I0909 00:19:41.461824 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 9 00:19:41.461922 kubelet[2562]: I0909 00:19:41.461912 2562 state_mem.go:36] "Initialized new in-memory state store" Sep 9 00:19:41.462110 kubelet[2562]: I0909 00:19:41.462093 2562 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 00:19:41.462202 kubelet[2562]: I0909 00:19:41.462178 2562 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 00:19:41.462250 kubelet[2562]: I0909 00:19:41.462242 2562 policy_none.go:49] "None policy: Start" Sep 9 00:19:41.462312 kubelet[2562]: I0909 00:19:41.462300 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 9 00:19:41.462420 kubelet[2562]: I0909 00:19:41.462411 2562 state_mem.go:35] "Initializing new in-memory state store" Sep 9 00:19:41.462607 kubelet[2562]: I0909 00:19:41.462571 2562 state_mem.go:75] "Updated machine memory state" Sep 9 00:19:41.467084 kubelet[2562]: E0909 00:19:41.467063 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 9 00:19:41.467662 kubelet[2562]: I0909 00:19:41.467649 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 00:19:41.467808 kubelet[2562]: I0909 00:19:41.467757 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 00:19:41.468272 kubelet[2562]: I0909 
00:19:41.468254 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 00:19:41.472638 kubelet[2562]: E0909 00:19:41.472574 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 9 00:19:41.527830 kubelet[2562]: I0909 00:19:41.527790 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:41.527965 kubelet[2562]: I0909 00:19:41.527914 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.528055 kubelet[2562]: I0909 00:19:41.527787 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:41.537005 kubelet[2562]: E0909 00:19:41.536964 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:41.537005 kubelet[2562]: E0909 00:19:41.536996 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.537222 kubelet[2562]: E0909 00:19:41.536964 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:41.581140 kubelet[2562]: I0909 00:19:41.580646 2562 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 9 00:19:41.588772 kubelet[2562]: I0909 00:19:41.588710 2562 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 9 00:19:41.588915 kubelet[2562]: I0909 00:19:41.588819 2562 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 9 00:19:41.611305 kubelet[2562]: I0909 00:19:41.611181 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:41.611305 kubelet[2562]: I0909 00:19:41.611274 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.611305 kubelet[2562]: I0909 00:19:41.611304 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.611631 kubelet[2562]: I0909 00:19:41.611336 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.611631 kubelet[2562]: I0909 00:19:41.611393 2562 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 9 00:19:41.611631 kubelet[2562]: I0909 00:19:41.611414 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:41.611631 kubelet[2562]: I0909 00:19:41.611433 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b89365fedd762b6e9a347057cb98e7bb-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b89365fedd762b6e9a347057cb98e7bb\") " pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:41.611631 kubelet[2562]: I0909 00:19:41.611451 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.611821 kubelet[2562]: I0909 00:19:41.611469 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 00:19:41.838130 kubelet[2562]: E0909 00:19:41.837532 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:41.838130 kubelet[2562]: E0909 00:19:41.837540 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:41.838130 kubelet[2562]: E0909 00:19:41.837698 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:42.379727 kubelet[2562]: I0909 00:19:42.379668 2562 apiserver.go:52] "Watching apiserver" Sep 9 00:19:42.410836 kubelet[2562]: I0909 00:19:42.410750 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 9 00:19:42.444678 kubelet[2562]: I0909 00:19:42.444507 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:42.444678 kubelet[2562]: E0909 00:19:42.444506 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:42.445692 kubelet[2562]: E0909 00:19:42.445660 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:42.453352 kubelet[2562]: E0909 
00:19:42.453309 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 00:19:42.453936 kubelet[2562]: E0909 00:19:42.453797 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:42.478400 kubelet[2562]: I0909 00:19:42.476835 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=4.476809187 podStartE2EDuration="4.476809187s" podCreationTimestamp="2025-09-09 00:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:42.469098872 +0000 UTC m=+1.157089371" watchObservedRunningTime="2025-09-09 00:19:42.476809187 +0000 UTC m=+1.164799686" Sep 9 00:19:42.484868 kubelet[2562]: I0909 00:19:42.484816 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.484799801 podStartE2EDuration="3.484799801s" podCreationTimestamp="2025-09-09 00:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:42.476987153 +0000 UTC m=+1.164977652" watchObservedRunningTime="2025-09-09 00:19:42.484799801 +0000 UTC m=+1.172790300" Sep 9 00:19:42.499438 kubelet[2562]: I0909 00:19:42.498334 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.4983165769999998 podStartE2EDuration="3.498316577s" podCreationTimestamp="2025-09-09 00:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:42.484905211 +0000 UTC m=+1.172895710" watchObservedRunningTime="2025-09-09 00:19:42.498316577 +0000 UTC m=+1.186307076" Sep 9 00:19:43.446593 kubelet[2562]: E0909 00:19:43.446517 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:43.446593 kubelet[2562]: E0909 00:19:43.446572 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:44.448985 kubelet[2562]: E0909 00:19:44.448922 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:46.224958 kubelet[2562]: I0909 00:19:46.224886 2562 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 00:19:46.225559 containerd[1473]: time="2025-09-09T00:19:46.225417865Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 9 00:19:46.225920 kubelet[2562]: I0909 00:19:46.225742 2562 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 00:19:47.038260 systemd[1]: Created slice kubepods-besteffort-pod7e46ecb5_d260_43b5_9a0a_3b5a76702f2d.slice - libcontainer container kubepods-besteffort-pod7e46ecb5_d260_43b5_9a0a_3b5a76702f2d.slice. 
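
The kubepods-besteffort-pod7e46ecb5_d260_43b5_9a0a_3b5a76702f2d.slice unit created just above is derived mechanically from the pod's UID: the systemd cgroup driver escapes the UID's dashes to underscores and folds the pod's QoS class ("besteffort" here) into the unit name. A minimal sketch of that mapping (illustrative, assuming the BestEffort QoS class; not the kubelet's actual source):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // sliceForBestEffortPod shows how a pod UID such as
    // "7e46ecb5-d260-43b5-9a0a-3b5a76702f2d" becomes the slice name
    // "kubepods-besteffort-pod7e46ecb5_d260_43b5_9a0a_3b5a76702f2d.slice"
    // seen in the journal entry above.
    func sliceForBestEffortPod(uid string) string {
    	return "kubepods-besteffort-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(sliceForBestEffortPod("7e46ecb5-d260-43b5-9a0a-3b5a76702f2d"))
    }
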
Sep 9 00:19:47.045651 kubelet[2562]: I0909 00:19:47.045597 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7e46ecb5-d260-43b5-9a0a-3b5a76702f2d-lib-modules\") pod \"kube-proxy-br7n5\" (UID: \"7e46ecb5-d260-43b5-9a0a-3b5a76702f2d\") " pod="kube-system/kube-proxy-br7n5" Sep 9 00:19:47.045651 kubelet[2562]: I0909 00:19:47.045642 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7e46ecb5-d260-43b5-9a0a-3b5a76702f2d-kube-proxy\") pod \"kube-proxy-br7n5\" (UID: \"7e46ecb5-d260-43b5-9a0a-3b5a76702f2d\") " pod="kube-system/kube-proxy-br7n5" Sep 9 00:19:47.045885 kubelet[2562]: I0909 00:19:47.045667 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7e46ecb5-d260-43b5-9a0a-3b5a76702f2d-xtables-lock\") pod \"kube-proxy-br7n5\" (UID: \"7e46ecb5-d260-43b5-9a0a-3b5a76702f2d\") " pod="kube-system/kube-proxy-br7n5" Sep 9 00:19:47.045885 kubelet[2562]: I0909 00:19:47.045691 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns2dr\" (UniqueName: \"kubernetes.io/projected/7e46ecb5-d260-43b5-9a0a-3b5a76702f2d-kube-api-access-ns2dr\") pod \"kube-proxy-br7n5\" (UID: \"7e46ecb5-d260-43b5-9a0a-3b5a76702f2d\") " pod="kube-system/kube-proxy-br7n5" Sep 9 00:19:47.348399 kubelet[2562]: E0909 00:19:47.348240 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:47.349339 containerd[1473]: time="2025-09-09T00:19:47.349276387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-br7n5,Uid:7e46ecb5-d260-43b5-9a0a-3b5a76702f2d,Namespace:kube-system,Attempt:0,}" Sep 9 00:19:47.360843 systemd[1]: Created slice kubepods-besteffort-podd916b0f2_df7d_4257_96ab_fc35460eaa0a.slice - libcontainer container kubepods-besteffort-podd916b0f2_df7d_4257_96ab_fc35460eaa0a.slice. Sep 9 00:19:47.383536 containerd[1473]: time="2025-09-09T00:19:47.383350557Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:19:47.383536 containerd[1473]: time="2025-09-09T00:19:47.383452469Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:19:47.383536 containerd[1473]: time="2025-09-09T00:19:47.383467137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:47.383803 containerd[1473]: time="2025-09-09T00:19:47.383588075Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:47.406591 systemd[1]: Started cri-containerd-fa952dfa1853d838eba78b1b07417f2ea374505af9e13ca0272bd8a4f2f82174.scope - libcontainer container fa952dfa1853d838eba78b1b07417f2ea374505af9e13ca0272bd8a4f2f82174. 
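
The RunPodSandbox request above and the "returns sandbox id" reply just below are the kubelet driving containerd over the CRI gRPC API on a local unix socket. A minimal client-side sketch of that exchange (an assumption for illustration — containerd's default socket path, not a path taken from this log, and not the kubelet's own code):

    package main

    import (
    	"context"
    	"log"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// The CRI endpoint path is an assumption (containerd's default).
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer conn.Close()

    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	// Mirrors the PodSandboxMetadata printed in the log entry above.
    	resp, err := rt.RunPodSandbox(context.Background(), &runtimeapi.RunPodSandboxRequest{
    		Config: &runtimeapi.PodSandboxConfig{
    			Metadata: &runtimeapi.PodSandboxMetadata{
    				Name:      "kube-proxy-br7n5",
    				Uid:       "7e46ecb5-d260-43b5-9a0a-3b5a76702f2d",
    				Namespace: "kube-system",
    				Attempt:   0,
    			},
    		},
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    	// containerd answers with the sandbox id the log echoes back
    	// ("fa952dfa1853d8...").
    	log.Println("sandbox id:", resp.PodSandboxId)
    }
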
Sep 9 00:19:47.437182 containerd[1473]: time="2025-09-09T00:19:47.437133637Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-br7n5,Uid:7e46ecb5-d260-43b5-9a0a-3b5a76702f2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"fa952dfa1853d838eba78b1b07417f2ea374505af9e13ca0272bd8a4f2f82174\"" Sep 9 00:19:47.438132 kubelet[2562]: E0909 00:19:47.437941 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:47.444877 containerd[1473]: time="2025-09-09T00:19:47.444834379Z" level=info msg="CreateContainer within sandbox \"fa952dfa1853d838eba78b1b07417f2ea374505af9e13ca0272bd8a4f2f82174\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 00:19:47.448415 kubelet[2562]: I0909 00:19:47.448375 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d916b0f2-df7d-4257-96ab-fc35460eaa0a-var-lib-calico\") pod \"tigera-operator-755d956888-gjmmd\" (UID: \"d916b0f2-df7d-4257-96ab-fc35460eaa0a\") " pod="tigera-operator/tigera-operator-755d956888-gjmmd" Sep 9 00:19:47.448415 kubelet[2562]: I0909 00:19:47.448415 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqh49\" (UniqueName: \"kubernetes.io/projected/d916b0f2-df7d-4257-96ab-fc35460eaa0a-kube-api-access-nqh49\") pod \"tigera-operator-755d956888-gjmmd\" (UID: \"d916b0f2-df7d-4257-96ab-fc35460eaa0a\") " pod="tigera-operator/tigera-operator-755d956888-gjmmd" Sep 9 00:19:47.467804 containerd[1473]: time="2025-09-09T00:19:47.467742146Z" level=info msg="CreateContainer within sandbox \"fa952dfa1853d838eba78b1b07417f2ea374505af9e13ca0272bd8a4f2f82174\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"16da60cd303c3e21ee87b0d744209d999c6553f342550f7789a4179094383c12\"" Sep 9 00:19:47.468554 containerd[1473]: time="2025-09-09T00:19:47.468524651Z" level=info msg="StartContainer for \"16da60cd303c3e21ee87b0d744209d999c6553f342550f7789a4179094383c12\"" Sep 9 00:19:47.504643 systemd[1]: Started cri-containerd-16da60cd303c3e21ee87b0d744209d999c6553f342550f7789a4179094383c12.scope - libcontainer container 16da60cd303c3e21ee87b0d744209d999c6553f342550f7789a4179094383c12. Sep 9 00:19:47.543770 containerd[1473]: time="2025-09-09T00:19:47.543713228Z" level=info msg="StartContainer for \"16da60cd303c3e21ee87b0d744209d999c6553f342550f7789a4179094383c12\" returns successfully" Sep 9 00:19:47.664960 containerd[1473]: time="2025-09-09T00:19:47.664792963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gjmmd,Uid:d916b0f2-df7d-4257-96ab-fc35460eaa0a,Namespace:tigera-operator,Attempt:0,}" Sep 9 00:19:47.694315 containerd[1473]: time="2025-09-09T00:19:47.694173007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:19:47.694315 containerd[1473]: time="2025-09-09T00:19:47.694244742Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:19:47.694315 containerd[1473]: time="2025-09-09T00:19:47.694260923Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:47.694797 containerd[1473]: time="2025-09-09T00:19:47.694689360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:19:47.721454 systemd[1]: Started cri-containerd-6697a89503ccdd665c8a599adfb6acfba8dd8df4eee77721f654a003e1b0abc1.scope - libcontainer container 6697a89503ccdd665c8a599adfb6acfba8dd8df4eee77721f654a003e1b0abc1. Sep 9 00:19:47.773813 containerd[1473]: time="2025-09-09T00:19:47.773749222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-gjmmd,Uid:d916b0f2-df7d-4257-96ab-fc35460eaa0a,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6697a89503ccdd665c8a599adfb6acfba8dd8df4eee77721f654a003e1b0abc1\"" Sep 9 00:19:47.776827 containerd[1473]: time="2025-09-09T00:19:47.776496328Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 9 00:19:48.164778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2501185562.mount: Deactivated successfully. Sep 9 00:19:48.285296 kubelet[2562]: E0909 00:19:48.285232 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:48.457759 kubelet[2562]: E0909 00:19:48.457721 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:48.458821 kubelet[2562]: E0909 00:19:48.458783 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:49.460866 kubelet[2562]: E0909 00:19:49.460803 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:49.877323 kubelet[2562]: E0909 00:19:49.877215 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:49.891395 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2479932709.mount: Deactivated successfully. 
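
The dns.go:153 "Nameserver limits exceeded" entries that recur throughout this boot come from the kubelet clamping the node's resolver list: glibc honors at most three nameserver entries, so any resolv.conf nameservers past "1.1.1.1 1.0.0.1 8.8.8.8" are dropped and the warning is logged. A minimal sketch of that truncation (illustrative only, not the kubelet's source; the fourth nameserver is a hypothetical example):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // glibc's resolver honors at most three nameservers, which is the
    // limit the kubelet enforces when it builds pod resolv.conf files.
    const maxResolvConfNameservers = 3

    func clampNameservers(resolvConf string) []string {
    	var servers []string
    	for _, line := range strings.Split(resolvConf, "\n") {
    		fields := strings.Fields(line)
    		if len(fields) >= 2 && fields[0] == "nameserver" {
    			servers = append(servers, fields[1])
    		}
    	}
    	if len(servers) > maxResolvConfNameservers {
    		// Mirrors the warning recurring in the log above.
    		fmt.Printf("Nameserver limits were exceeded, the applied nameserver line is: %s\n",
    			strings.Join(servers[:maxResolvConfNameservers], " "))
    		servers = servers[:maxResolvConfNameservers]
    	}
    	return servers
    }

    func main() {
    	// Four nameservers: one more than the limit, so the warning fires.
    	clampNameservers("nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 8.8.4.4\n")
    }
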
Sep 9 00:19:49.893343 kubelet[2562]: I0909 00:19:49.893297 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-br7n5" podStartSLOduration=3.893275113 podStartE2EDuration="3.893275113s" podCreationTimestamp="2025-09-09 00:19:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:19:48.897891799 +0000 UTC m=+7.585882298" watchObservedRunningTime="2025-09-09 00:19:49.893275113 +0000 UTC m=+8.581265612" Sep 9 00:19:50.462074 kubelet[2562]: E0909 00:19:50.462030 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:51.464322 kubelet[2562]: E0909 00:19:51.464283 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:52.153172 containerd[1473]: time="2025-09-09T00:19:52.153086726Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:52.154316 containerd[1473]: time="2025-09-09T00:19:52.154258270Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 9 00:19:52.155581 containerd[1473]: time="2025-09-09T00:19:52.155547296Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:52.157757 containerd[1473]: time="2025-09-09T00:19:52.157688226Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:19:52.158357 containerd[1473]: time="2025-09-09T00:19:52.158315106Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 4.381768632s" Sep 9 00:19:52.158357 containerd[1473]: time="2025-09-09T00:19:52.158346795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 9 00:19:52.163695 containerd[1473]: time="2025-09-09T00:19:52.163646028Z" level=info msg="CreateContainer within sandbox \"6697a89503ccdd665c8a599adfb6acfba8dd8df4eee77721f654a003e1b0abc1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 9 00:19:52.177553 containerd[1473]: time="2025-09-09T00:19:52.177507706Z" level=info msg="CreateContainer within sandbox \"6697a89503ccdd665c8a599adfb6acfba8dd8df4eee77721f654a003e1b0abc1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"e8d0570daff8786065ec282a614b382f4dc31cd4bc6ab02cb977c290d0f75a83\"" Sep 9 00:19:52.178127 containerd[1473]: time="2025-09-09T00:19:52.178068973Z" level=info msg="StartContainer for \"e8d0570daff8786065ec282a614b382f4dc31cd4bc6ab02cb977c290d0f75a83\"" Sep 9 00:19:52.213579 systemd[1]: Started cri-containerd-e8d0570daff8786065ec282a614b382f4dc31cd4bc6ab02cb977c290d0f75a83.scope - libcontainer 
container e8d0570daff8786065ec282a614b382f4dc31cd4bc6ab02cb977c290d0f75a83. Sep 9 00:19:52.241993 containerd[1473]: time="2025-09-09T00:19:52.241947433Z" level=info msg="StartContainer for \"e8d0570daff8786065ec282a614b382f4dc31cd4bc6ab02cb977c290d0f75a83\" returns successfully" Sep 9 00:19:52.507558 kubelet[2562]: I0909 00:19:52.507480 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-gjmmd" podStartSLOduration=1.12382703 podStartE2EDuration="5.50745875s" podCreationTimestamp="2025-09-09 00:19:47 +0000 UTC" firstStartedPulling="2025-09-09 00:19:47.775720737 +0000 UTC m=+6.463711236" lastFinishedPulling="2025-09-09 00:19:52.159352457 +0000 UTC m=+10.847342956" observedRunningTime="2025-09-09 00:19:52.504713193 +0000 UTC m=+11.192703702" watchObservedRunningTime="2025-09-09 00:19:52.50745875 +0000 UTC m=+11.195449249" Sep 9 00:19:54.009234 kubelet[2562]: E0909 00:19:54.009130 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:19:59.080382 sudo[1663]: pam_unix(sudo:session): session closed for user root Sep 9 00:19:59.085839 sshd[1660]: pam_unix(sshd:session): session closed for user core Sep 9 00:19:59.092052 systemd[1]: sshd@8-10.0.0.15:22-10.0.0.1:42490.service: Deactivated successfully. Sep 9 00:19:59.096037 systemd[1]: session-9.scope: Deactivated successfully. Sep 9 00:19:59.096428 systemd[1]: session-9.scope: Consumed 6.397s CPU time, 163.3M memory peak, 0B memory swap peak. Sep 9 00:19:59.098546 systemd-logind[1454]: Session 9 logged out. Waiting for processes to exit. Sep 9 00:19:59.100521 systemd-logind[1454]: Removed session 9. Sep 9 00:20:05.500608 systemd[1]: Created slice kubepods-besteffort-pod05212130_66ef_41a7_a791_f9bd0add1e97.slice - libcontainer container kubepods-besteffort-pod05212130_66ef_41a7_a791_f9bd0add1e97.slice. Sep 9 00:20:05.619881 systemd[1]: Created slice kubepods-besteffort-podc47a8ccc_fa7c_45c8_b4e3_619657ac6bdc.slice - libcontainer container kubepods-besteffort-podc47a8ccc_fa7c_45c8_b4e3_619657ac6bdc.slice. 
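
The bursts of driver-call.go and plugins.go errors a few entries below (which then recur for the rest of this section) are the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ for FlexVolume drivers: Calico's install leaves a nodeagent~uds directory there, but the uds binary is absent, so the probe's "init" call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". A FlexVolume driver satisfies the probe by answering "init" with a status object on stdout; a minimal, hypothetical stub showing that contract (the JSON shape follows the FlexVolume convention — this is not Calico's real uds binary):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus is the reply shape the kubelet's driver-call.go tries
    // to unmarshal; empty stdout is exactly what yields the
    // "unexpected end of JSON input" errors in this log.
    type driverStatus struct {
    	Status       string          `json:"status"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
    	if len(os.Args) > 1 && os.Args[1] == "init" {
    		reply, _ := json.Marshal(driverStatus{
    			Status:       "Success",
    			Capabilities: map[string]bool{"attach": false},
    		})
    		fmt.Println(string(reply))
    		return
    	}
    	// Any other call is declined, which the kubelet tolerates.
    	reply, _ := json.Marshal(driverStatus{Status: "Not supported"})
    	fmt.Println(string(reply))
    	os.Exit(1)
    }
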
Sep 9 00:20:05.667399 kubelet[2562]: I0909 00:20:05.667281 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/05212130-66ef-41a7-a791-f9bd0add1e97-tigera-ca-bundle\") pod \"calico-typha-57fd76d568-92v7x\" (UID: \"05212130-66ef-41a7-a791-f9bd0add1e97\") " pod="calico-system/calico-typha-57fd76d568-92v7x" Sep 9 00:20:05.667399 kubelet[2562]: I0909 00:20:05.667334 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrnbt\" (UniqueName: \"kubernetes.io/projected/05212130-66ef-41a7-a791-f9bd0add1e97-kube-api-access-lrnbt\") pod \"calico-typha-57fd76d568-92v7x\" (UID: \"05212130-66ef-41a7-a791-f9bd0add1e97\") " pod="calico-system/calico-typha-57fd76d568-92v7x" Sep 9 00:20:05.667399 kubelet[2562]: I0909 00:20:05.667351 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/05212130-66ef-41a7-a791-f9bd0add1e97-typha-certs\") pod \"calico-typha-57fd76d568-92v7x\" (UID: \"05212130-66ef-41a7-a791-f9bd0add1e97\") " pod="calico-system/calico-typha-57fd76d568-92v7x" Sep 9 00:20:05.742981 kubelet[2562]: E0909 00:20:05.742903 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:05.768543 kubelet[2562]: I0909 00:20:05.767763 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-cni-log-dir\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768543 kubelet[2562]: I0909 00:20:05.767842 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-lib-modules\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768543 kubelet[2562]: I0909 00:20:05.767868 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvcj2\" (UniqueName: \"kubernetes.io/projected/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-kube-api-access-pvcj2\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768543 kubelet[2562]: I0909 00:20:05.767948 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-tigera-ca-bundle\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768543 kubelet[2562]: I0909 00:20:05.768040 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-var-lib-calico\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " 
pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768867 kubelet[2562]: I0909 00:20:05.768064 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-node-certs\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768867 kubelet[2562]: I0909 00:20:05.768133 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-var-run-calico\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768867 kubelet[2562]: I0909 00:20:05.768264 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-cni-net-dir\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768867 kubelet[2562]: I0909 00:20:05.768289 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-policysync\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.768867 kubelet[2562]: I0909 00:20:05.768307 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-xtables-lock\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.769099 kubelet[2562]: I0909 00:20:05.768329 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-cni-bin-dir\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.769099 kubelet[2562]: I0909 00:20:05.768351 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc-flexvol-driver-host\") pod \"calico-node-dxgnk\" (UID: \"c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc\") " pod="calico-system/calico-node-dxgnk" Sep 9 00:20:05.807644 kubelet[2562]: E0909 00:20:05.807586 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:05.809408 containerd[1473]: time="2025-09-09T00:20:05.808322418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57fd76d568-92v7x,Uid:05212130-66ef-41a7-a791-f9bd0add1e97,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:05.843478 containerd[1473]: time="2025-09-09T00:20:05.843219580Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:05.843478 containerd[1473]: time="2025-09-09T00:20:05.843294059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:05.843478 containerd[1473]: time="2025-09-09T00:20:05.843308116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:05.843739 containerd[1473]: time="2025-09-09T00:20:05.843456895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:05.870043 kubelet[2562]: I0909 00:20:05.868581 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpmvj\" (UniqueName: \"kubernetes.io/projected/167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe-kube-api-access-qpmvj\") pod \"csi-node-driver-7zmjc\" (UID: \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\") " pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:05.870043 kubelet[2562]: I0909 00:20:05.868626 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe-kubelet-dir\") pod \"csi-node-driver-7zmjc\" (UID: \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\") " pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:05.870043 kubelet[2562]: I0909 00:20:05.868643 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe-socket-dir\") pod \"csi-node-driver-7zmjc\" (UID: \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\") " pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:05.870043 kubelet[2562]: I0909 00:20:05.868676 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe-registration-dir\") pod \"csi-node-driver-7zmjc\" (UID: \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\") " pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:05.870043 kubelet[2562]: I0909 00:20:05.869028 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe-varrun\") pod \"csi-node-driver-7zmjc\" (UID: \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\") " pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:05.871993 systemd[1]: Started cri-containerd-1e3ced04873f76d9a21d91b6f607e241a2a55a87283731887bb018317da58d0a.scope - libcontainer container 1e3ced04873f76d9a21d91b6f607e241a2a55a87283731887bb018317da58d0a. Sep 9 00:20:05.873026 kubelet[2562]: E0909 00:20:05.872993 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.873026 kubelet[2562]: W0909 00:20:05.873019 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.873133 kubelet[2562]: E0909 00:20:05.873070 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.876982 kubelet[2562]: E0909 00:20:05.876948 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.876982 kubelet[2562]: W0909 00:20:05.876976 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.877108 kubelet[2562]: E0909 00:20:05.876997 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.883057 kubelet[2562]: E0909 00:20:05.883008 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.883057 kubelet[2562]: W0909 00:20:05.883045 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.883236 kubelet[2562]: E0909 00:20:05.883076 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.926349 containerd[1473]: time="2025-09-09T00:20:05.926288209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxgnk,Uid:c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:05.926349 containerd[1473]: time="2025-09-09T00:20:05.926390942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57fd76d568-92v7x,Uid:05212130-66ef-41a7-a791-f9bd0add1e97,Namespace:calico-system,Attempt:0,} returns sandbox id \"1e3ced04873f76d9a21d91b6f607e241a2a55a87283731887bb018317da58d0a\"" Sep 9 00:20:05.927711 kubelet[2562]: E0909 00:20:05.927684 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:05.929304 containerd[1473]: time="2025-09-09T00:20:05.929266073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 9 00:20:05.958740 containerd[1473]: time="2025-09-09T00:20:05.958589205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:05.958740 containerd[1473]: time="2025-09-09T00:20:05.958674014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:05.959028 containerd[1473]: time="2025-09-09T00:20:05.958715091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:05.959028 containerd[1473]: time="2025-09-09T00:20:05.958859632Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:05.970425 kubelet[2562]: E0909 00:20:05.970358 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.970425 kubelet[2562]: W0909 00:20:05.970414 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.970633 kubelet[2562]: E0909 00:20:05.970442 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.970792 kubelet[2562]: E0909 00:20:05.970769 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.970792 kubelet[2562]: W0909 00:20:05.970783 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.970876 kubelet[2562]: E0909 00:20:05.970795 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.971121 kubelet[2562]: E0909 00:20:05.971102 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.971121 kubelet[2562]: W0909 00:20:05.971119 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.971229 kubelet[2562]: E0909 00:20:05.971134 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.971465 kubelet[2562]: E0909 00:20:05.971445 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.971465 kubelet[2562]: W0909 00:20:05.971460 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.971558 kubelet[2562]: E0909 00:20:05.971472 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.971758 kubelet[2562]: E0909 00:20:05.971737 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.971758 kubelet[2562]: W0909 00:20:05.971753 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.971854 kubelet[2562]: E0909 00:20:05.971764 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.972187 kubelet[2562]: E0909 00:20:05.972119 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.972187 kubelet[2562]: W0909 00:20:05.972138 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.972187 kubelet[2562]: E0909 00:20:05.972153 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.972988 kubelet[2562]: E0909 00:20:05.972462 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.972988 kubelet[2562]: W0909 00:20:05.972477 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.972988 kubelet[2562]: E0909 00:20:05.972488 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.975929 kubelet[2562]: E0909 00:20:05.975598 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.975929 kubelet[2562]: W0909 00:20:05.975621 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.975929 kubelet[2562]: E0909 00:20:05.975641 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.976101 kubelet[2562]: E0909 00:20:05.975985 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.976101 kubelet[2562]: W0909 00:20:05.975998 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.976101 kubelet[2562]: E0909 00:20:05.976009 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.977302 kubelet[2562]: E0909 00:20:05.977214 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.977302 kubelet[2562]: W0909 00:20:05.977228 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.977302 kubelet[2562]: E0909 00:20:05.977240 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.978194 kubelet[2562]: E0909 00:20:05.977717 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.978194 kubelet[2562]: W0909 00:20:05.977733 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.978194 kubelet[2562]: E0909 00:20:05.977745 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.978194 kubelet[2562]: E0909 00:20:05.978024 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.978194 kubelet[2562]: W0909 00:20:05.978035 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.978194 kubelet[2562]: E0909 00:20:05.978047 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.978700 kubelet[2562]: E0909 00:20:05.978526 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.978700 kubelet[2562]: W0909 00:20:05.978542 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.978700 kubelet[2562]: E0909 00:20:05.978554 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.978898 kubelet[2562]: E0909 00:20:05.978865 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.978898 kubelet[2562]: W0909 00:20:05.978883 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.978968 kubelet[2562]: E0909 00:20:05.978896 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.979182 kubelet[2562]: E0909 00:20:05.979155 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.979182 kubelet[2562]: W0909 00:20:05.979172 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.979273 kubelet[2562]: E0909 00:20:05.979184 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.979568 kubelet[2562]: E0909 00:20:05.979464 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.979568 kubelet[2562]: W0909 00:20:05.979478 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.979568 kubelet[2562]: E0909 00:20:05.979490 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.979786 kubelet[2562]: E0909 00:20:05.979768 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.979786 kubelet[2562]: W0909 00:20:05.979781 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.979912 kubelet[2562]: E0909 00:20:05.979793 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.980850 kubelet[2562]: E0909 00:20:05.980318 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.980850 kubelet[2562]: W0909 00:20:05.980335 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.980850 kubelet[2562]: E0909 00:20:05.980348 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.981543 kubelet[2562]: E0909 00:20:05.981016 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.981543 kubelet[2562]: W0909 00:20:05.981034 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.981543 kubelet[2562]: E0909 00:20:05.981068 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.982663 kubelet[2562]: E0909 00:20:05.981979 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.982663 kubelet[2562]: W0909 00:20:05.981999 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.982663 kubelet[2562]: E0909 00:20:05.982013 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.984558 kubelet[2562]: E0909 00:20:05.983995 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.984558 kubelet[2562]: W0909 00:20:05.984015 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.984558 kubelet[2562]: E0909 00:20:05.984031 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.985879 kubelet[2562]: E0909 00:20:05.985840 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.985951 kubelet[2562]: W0909 00:20:05.985876 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.985951 kubelet[2562]: E0909 00:20:05.985907 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.986568 kubelet[2562]: E0909 00:20:05.986543 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.986568 kubelet[2562]: W0909 00:20:05.986561 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.986654 kubelet[2562]: E0909 00:20:05.986573 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.986867 kubelet[2562]: E0909 00:20:05.986844 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.986867 kubelet[2562]: W0909 00:20:05.986859 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.986867 kubelet[2562]: E0909 00:20:05.986869 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:05.987207 kubelet[2562]: E0909 00:20:05.987181 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.987207 kubelet[2562]: W0909 00:20:05.987202 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.987284 kubelet[2562]: E0909 00:20:05.987215 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:05.992722 systemd[1]: Started cri-containerd-ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63.scope - libcontainer container ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63. Sep 9 00:20:05.995740 kubelet[2562]: E0909 00:20:05.995716 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:05.995740 kubelet[2562]: W0909 00:20:05.995737 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:05.995859 kubelet[2562]: E0909 00:20:05.995756 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:06.026060 containerd[1473]: time="2025-09-09T00:20:06.025899238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-dxgnk,Uid:c47a8ccc-fa7c-45c8-b4e3-619657ac6bdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\"" Sep 9 00:20:07.431614 kubelet[2562]: E0909 00:20:07.431551 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:07.463123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2895253569.mount: Deactivated successfully. Sep 9 00:20:09.427532 kubelet[2562]: E0909 00:20:09.427153 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:09.811939 containerd[1473]: time="2025-09-09T00:20:09.811871908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:09.814002 containerd[1473]: time="2025-09-09T00:20:09.813946994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 9 00:20:09.815656 containerd[1473]: time="2025-09-09T00:20:09.815616189Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:09.818749 containerd[1473]: time="2025-09-09T00:20:09.818699308Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:09.819318 containerd[1473]: time="2025-09-09T00:20:09.819287612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.889982426s" Sep 9 00:20:09.819428 
containerd[1473]: time="2025-09-09T00:20:09.819321806Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 9 00:20:09.820629 containerd[1473]: time="2025-09-09T00:20:09.820579559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 9 00:20:09.842880 containerd[1473]: time="2025-09-09T00:20:09.842806447Z" level=info msg="CreateContainer within sandbox \"1e3ced04873f76d9a21d91b6f607e241a2a55a87283731887bb018317da58d0a\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 9 00:20:09.864300 containerd[1473]: time="2025-09-09T00:20:09.864225858Z" level=info msg="CreateContainer within sandbox \"1e3ced04873f76d9a21d91b6f607e241a2a55a87283731887bb018317da58d0a\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ae91626cf8c44e895c138c13ca50c5a8342cdd76e31fd9793d3f2d957eef207e\"" Sep 9 00:20:09.864775 containerd[1473]: time="2025-09-09T00:20:09.864736828Z" level=info msg="StartContainer for \"ae91626cf8c44e895c138c13ca50c5a8342cdd76e31fd9793d3f2d957eef207e\"" Sep 9 00:20:09.898491 systemd[1]: Started cri-containerd-ae91626cf8c44e895c138c13ca50c5a8342cdd76e31fd9793d3f2d957eef207e.scope - libcontainer container ae91626cf8c44e895c138c13ca50c5a8342cdd76e31fd9793d3f2d957eef207e. Sep 9 00:20:09.946478 containerd[1473]: time="2025-09-09T00:20:09.946413902Z" level=info msg="StartContainer for \"ae91626cf8c44e895c138c13ca50c5a8342cdd76e31fd9793d3f2d957eef207e\" returns successfully" Sep 9 00:20:10.523593 kubelet[2562]: E0909 00:20:10.523551 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:10.536105 kubelet[2562]: I0909 00:20:10.536024 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57fd76d568-92v7x" podStartSLOduration=1.644699026 podStartE2EDuration="5.536008204s" podCreationTimestamp="2025-09-09 00:20:05 +0000 UTC" firstStartedPulling="2025-09-09 00:20:05.928895727 +0000 UTC m=+24.616886226" lastFinishedPulling="2025-09-09 00:20:09.820204905 +0000 UTC m=+28.508195404" observedRunningTime="2025-09-09 00:20:10.535078669 +0000 UTC m=+29.223069178" watchObservedRunningTime="2025-09-09 00:20:10.536008204 +0000 UTC m=+29.223998703" Sep 9 00:20:10.596622 kubelet[2562]: E0909 00:20:10.596580 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.596622 kubelet[2562]: W0909 00:20:10.596604 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.596622 kubelet[2562]: E0909 00:20:10.596627 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.596879 kubelet[2562]: E0909 00:20:10.596817 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.596879 kubelet[2562]: W0909 00:20:10.596826 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.596879 kubelet[2562]: E0909 00:20:10.596835 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.597045 kubelet[2562]: E0909 00:20:10.597027 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.597045 kubelet[2562]: W0909 00:20:10.597038 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.597108 kubelet[2562]: E0909 00:20:10.597047 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.597275 kubelet[2562]: E0909 00:20:10.597258 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.597275 kubelet[2562]: W0909 00:20:10.597269 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.597326 kubelet[2562]: E0909 00:20:10.597278 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.597545 kubelet[2562]: E0909 00:20:10.597480 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.597545 kubelet[2562]: W0909 00:20:10.597506 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.597545 kubelet[2562]: E0909 00:20:10.597516 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.597751 kubelet[2562]: E0909 00:20:10.597731 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.597751 kubelet[2562]: W0909 00:20:10.597745 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.597805 kubelet[2562]: E0909 00:20:10.597757 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.598072 kubelet[2562]: E0909 00:20:10.598056 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.598072 kubelet[2562]: W0909 00:20:10.598067 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.598139 kubelet[2562]: E0909 00:20:10.598076 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.598311 kubelet[2562]: E0909 00:20:10.598297 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.598311 kubelet[2562]: W0909 00:20:10.598308 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.598357 kubelet[2562]: E0909 00:20:10.598316 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.598536 kubelet[2562]: E0909 00:20:10.598522 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.598536 kubelet[2562]: W0909 00:20:10.598535 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.598599 kubelet[2562]: E0909 00:20:10.598544 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.598884 kubelet[2562]: E0909 00:20:10.598861 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.598884 kubelet[2562]: W0909 00:20:10.598874 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.598884 kubelet[2562]: E0909 00:20:10.598883 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.599097 kubelet[2562]: E0909 00:20:10.599084 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.599097 kubelet[2562]: W0909 00:20:10.599094 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.599160 kubelet[2562]: E0909 00:20:10.599102 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.599421 kubelet[2562]: E0909 00:20:10.599377 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.599421 kubelet[2562]: W0909 00:20:10.599408 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.599421 kubelet[2562]: E0909 00:20:10.599437 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.599758 kubelet[2562]: E0909 00:20:10.599740 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.599758 kubelet[2562]: W0909 00:20:10.599755 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.599825 kubelet[2562]: E0909 00:20:10.599767 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.599995 kubelet[2562]: E0909 00:20:10.599971 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.599995 kubelet[2562]: W0909 00:20:10.599984 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.599995 kubelet[2562]: E0909 00:20:10.599993 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.600185 kubelet[2562]: E0909 00:20:10.600171 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.600213 kubelet[2562]: W0909 00:20:10.600185 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.600213 kubelet[2562]: E0909 00:20:10.600194 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.608527 kubelet[2562]: E0909 00:20:10.608493 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.608527 kubelet[2562]: W0909 00:20:10.608507 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.608527 kubelet[2562]: E0909 00:20:10.608517 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.608759 kubelet[2562]: E0909 00:20:10.608739 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.608759 kubelet[2562]: W0909 00:20:10.608753 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.608830 kubelet[2562]: E0909 00:20:10.608764 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.609013 kubelet[2562]: E0909 00:20:10.608986 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.609013 kubelet[2562]: W0909 00:20:10.608999 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.609013 kubelet[2562]: E0909 00:20:10.609009 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.609297 kubelet[2562]: E0909 00:20:10.609267 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.609297 kubelet[2562]: W0909 00:20:10.609285 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.609380 kubelet[2562]: E0909 00:20:10.609297 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.609596 kubelet[2562]: E0909 00:20:10.609571 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.609596 kubelet[2562]: W0909 00:20:10.609583 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.609596 kubelet[2562]: E0909 00:20:10.609592 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.609822 kubelet[2562]: E0909 00:20:10.609806 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.609822 kubelet[2562]: W0909 00:20:10.609818 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.609880 kubelet[2562]: E0909 00:20:10.609826 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.610094 kubelet[2562]: E0909 00:20:10.610075 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.610094 kubelet[2562]: W0909 00:20:10.610086 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.610094 kubelet[2562]: E0909 00:20:10.610095 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.610325 kubelet[2562]: E0909 00:20:10.610308 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.610325 kubelet[2562]: W0909 00:20:10.610318 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.610387 kubelet[2562]: E0909 00:20:10.610326 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.610578 kubelet[2562]: E0909 00:20:10.610560 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.610578 kubelet[2562]: W0909 00:20:10.610571 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.610640 kubelet[2562]: E0909 00:20:10.610580 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.610811 kubelet[2562]: E0909 00:20:10.610792 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.610811 kubelet[2562]: W0909 00:20:10.610803 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.610811 kubelet[2562]: E0909 00:20:10.610811 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.611051 kubelet[2562]: E0909 00:20:10.611024 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.611051 kubelet[2562]: W0909 00:20:10.611037 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.611051 kubelet[2562]: E0909 00:20:10.611044 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.611406 kubelet[2562]: E0909 00:20:10.611348 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.611473 kubelet[2562]: W0909 00:20:10.611403 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.611473 kubelet[2562]: E0909 00:20:10.611440 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.611752 kubelet[2562]: E0909 00:20:10.611725 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.611752 kubelet[2562]: W0909 00:20:10.611742 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.611814 kubelet[2562]: E0909 00:20:10.611753 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.611951 kubelet[2562]: E0909 00:20:10.611929 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.611951 kubelet[2562]: W0909 00:20:10.611941 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.611951 kubelet[2562]: E0909 00:20:10.611950 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.612166 kubelet[2562]: E0909 00:20:10.612151 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.612166 kubelet[2562]: W0909 00:20:10.612163 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.612214 kubelet[2562]: E0909 00:20:10.612171 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.612384 kubelet[2562]: E0909 00:20:10.612344 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.612384 kubelet[2562]: W0909 00:20:10.612355 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.612384 kubelet[2562]: E0909 00:20:10.612382 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:10.612610 kubelet[2562]: E0909 00:20:10.612595 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.612610 kubelet[2562]: W0909 00:20:10.612606 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.612658 kubelet[2562]: E0909 00:20:10.612615 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:10.613019 kubelet[2562]: E0909 00:20:10.613002 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:10.613019 kubelet[2562]: W0909 00:20:10.613018 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:10.613096 kubelet[2562]: E0909 00:20:10.613027 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.422002 containerd[1473]: time="2025-09-09T00:20:11.421871335Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:11.423417 containerd[1473]: time="2025-09-09T00:20:11.422675917Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 9 00:20:11.424761 containerd[1473]: time="2025-09-09T00:20:11.424694396Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:11.429405 kubelet[2562]: E0909 00:20:11.427636 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:11.430024 containerd[1473]: time="2025-09-09T00:20:11.429858662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:11.432922 containerd[1473]: time="2025-09-09T00:20:11.432850690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.612223482s" Sep 9 00:20:11.432922 containerd[1473]: time="2025-09-09T00:20:11.432921193Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 9 00:20:11.441588 containerd[1473]: 
time="2025-09-09T00:20:11.441488278Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 9 00:20:11.470829 containerd[1473]: time="2025-09-09T00:20:11.470718259Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787\"" Sep 9 00:20:11.475917 containerd[1473]: time="2025-09-09T00:20:11.472698877Z" level=info msg="StartContainer for \"bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787\"" Sep 9 00:20:11.530889 kubelet[2562]: E0909 00:20:11.530821 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:11.574945 systemd[1]: Started cri-containerd-bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787.scope - libcontainer container bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787. Sep 9 00:20:11.610149 kubelet[2562]: E0909 00:20:11.610049 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.610149 kubelet[2562]: W0909 00:20:11.610136 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.610644 kubelet[2562]: E0909 00:20:11.610217 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.611106 kubelet[2562]: E0909 00:20:11.611065 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.611106 kubelet[2562]: W0909 00:20:11.611090 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.611106 kubelet[2562]: E0909 00:20:11.611104 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.611630 kubelet[2562]: E0909 00:20:11.611584 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.611630 kubelet[2562]: W0909 00:20:11.611617 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.611766 kubelet[2562]: E0909 00:20:11.611642 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.612706 kubelet[2562]: E0909 00:20:11.612631 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.612853 kubelet[2562]: W0909 00:20:11.612718 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.612853 kubelet[2562]: E0909 00:20:11.612732 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.613962 kubelet[2562]: E0909 00:20:11.613876 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.613962 kubelet[2562]: W0909 00:20:11.613933 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.613962 kubelet[2562]: E0909 00:20:11.613956 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.614521 kubelet[2562]: E0909 00:20:11.614459 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.614521 kubelet[2562]: W0909 00:20:11.614509 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.614628 kubelet[2562]: E0909 00:20:11.614553 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.616118 kubelet[2562]: E0909 00:20:11.616073 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.616118 kubelet[2562]: W0909 00:20:11.616100 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.616118 kubelet[2562]: E0909 00:20:11.616125 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.616576 kubelet[2562]: E0909 00:20:11.616532 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.616576 kubelet[2562]: W0909 00:20:11.616551 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.616576 kubelet[2562]: E0909 00:20:11.616578 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.617442 kubelet[2562]: E0909 00:20:11.617271 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.617442 kubelet[2562]: W0909 00:20:11.617297 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.617442 kubelet[2562]: E0909 00:20:11.617315 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.619535 kubelet[2562]: E0909 00:20:11.619425 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.619535 kubelet[2562]: W0909 00:20:11.619445 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.619535 kubelet[2562]: E0909 00:20:11.619455 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.623398 kubelet[2562]: E0909 00:20:11.621263 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.623398 kubelet[2562]: W0909 00:20:11.621306 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.623398 kubelet[2562]: E0909 00:20:11.621327 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.623398 kubelet[2562]: E0909 00:20:11.621898 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.623398 kubelet[2562]: W0909 00:20:11.621922 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.623398 kubelet[2562]: E0909 00:20:11.621958 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.623666 kubelet[2562]: E0909 00:20:11.623608 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.623666 kubelet[2562]: W0909 00:20:11.623633 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.623666 kubelet[2562]: E0909 00:20:11.623644 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.623989 kubelet[2562]: E0909 00:20:11.623930 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.623989 kubelet[2562]: W0909 00:20:11.623976 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.623989 kubelet[2562]: E0909 00:20:11.623993 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.625220 kubelet[2562]: E0909 00:20:11.624357 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.625220 kubelet[2562]: W0909 00:20:11.624775 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.625220 kubelet[2562]: E0909 00:20:11.624802 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.626271 kubelet[2562]: E0909 00:20:11.626244 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.626271 kubelet[2562]: W0909 00:20:11.626263 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.626458 kubelet[2562]: E0909 00:20:11.626283 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.626776 kubelet[2562]: E0909 00:20:11.626754 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.626776 kubelet[2562]: W0909 00:20:11.626767 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.626776 kubelet[2562]: E0909 00:20:11.626777 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.628648 kubelet[2562]: E0909 00:20:11.628609 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.628648 kubelet[2562]: W0909 00:20:11.628626 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.628648 kubelet[2562]: E0909 00:20:11.628639 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.629059 kubelet[2562]: E0909 00:20:11.629024 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.629059 kubelet[2562]: W0909 00:20:11.629037 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.629059 kubelet[2562]: E0909 00:20:11.629047 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.630968 kubelet[2562]: E0909 00:20:11.630919 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.630968 kubelet[2562]: W0909 00:20:11.630942 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.630968 kubelet[2562]: E0909 00:20:11.630953 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.631852 kubelet[2562]: E0909 00:20:11.631686 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.631852 kubelet[2562]: W0909 00:20:11.631711 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.631852 kubelet[2562]: E0909 00:20:11.631726 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.632608 kubelet[2562]: E0909 00:20:11.632565 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.632608 kubelet[2562]: W0909 00:20:11.632596 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.632954 kubelet[2562]: E0909 00:20:11.632624 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.635511 kubelet[2562]: E0909 00:20:11.635474 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.635511 kubelet[2562]: W0909 00:20:11.635495 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.635511 kubelet[2562]: E0909 00:20:11.635507 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.636061 kubelet[2562]: E0909 00:20:11.636020 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.636061 kubelet[2562]: W0909 00:20:11.636052 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.636061 kubelet[2562]: E0909 00:20:11.636066 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.636515 kubelet[2562]: E0909 00:20:11.636484 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.636515 kubelet[2562]: W0909 00:20:11.636503 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.636515 kubelet[2562]: E0909 00:20:11.636515 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.636907 kubelet[2562]: E0909 00:20:11.636885 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.636907 kubelet[2562]: W0909 00:20:11.636899 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.636907 kubelet[2562]: E0909 00:20:11.636910 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.637492 kubelet[2562]: E0909 00:20:11.637462 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.637492 kubelet[2562]: W0909 00:20:11.637487 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.637625 kubelet[2562]: E0909 00:20:11.637509 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.638275 kubelet[2562]: E0909 00:20:11.638229 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.638275 kubelet[2562]: W0909 00:20:11.638259 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.638275 kubelet[2562]: E0909 00:20:11.638278 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.638820 kubelet[2562]: E0909 00:20:11.638782 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.638820 kubelet[2562]: W0909 00:20:11.638812 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.638971 kubelet[2562]: E0909 00:20:11.638828 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.639545 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.640702 kubelet[2562]: W0909 00:20:11.639567 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.639577 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.640071 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.640702 kubelet[2562]: W0909 00:20:11.640094 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.640113 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.640575 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.640702 kubelet[2562]: W0909 00:20:11.640587 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.640702 kubelet[2562]: E0909 00:20:11.640602 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 9 00:20:11.644559 kubelet[2562]: E0909 00:20:11.644496 2562 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 9 00:20:11.644559 kubelet[2562]: W0909 00:20:11.644534 2562 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 9 00:20:11.644559 kubelet[2562]: E0909 00:20:11.644570 2562 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 9 00:20:11.647693 containerd[1473]: time="2025-09-09T00:20:11.647225493Z" level=info msg="StartContainer for \"bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787\" returns successfully" Sep 9 00:20:11.662380 systemd[1]: cri-containerd-bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787.scope: Deactivated successfully. Sep 9 00:20:11.834379 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787-rootfs.mount: Deactivated successfully. Sep 9 00:20:11.978062 containerd[1473]: time="2025-09-09T00:20:11.975568146Z" level=info msg="shim disconnected" id=bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787 namespace=k8s.io Sep 9 00:20:11.978062 containerd[1473]: time="2025-09-09T00:20:11.978053854Z" level=warning msg="cleaning up after shim disconnected" id=bf09a5756b3d5e694fb0ec8f9b25cbf31c18cc6d4aea8cf960b86ee725e1f787 namespace=k8s.io Sep 9 00:20:11.978062 containerd[1473]: time="2025-09-09T00:20:11.978068451Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:20:12.533505 kubelet[2562]: E0909 00:20:12.533452 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:12.534067 containerd[1473]: time="2025-09-09T00:20:12.533656211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 9 00:20:13.427082 kubelet[2562]: E0909 00:20:13.427013 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:15.427302 kubelet[2562]: E0909 00:20:15.427166 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:16.822664 containerd[1473]: time="2025-09-09T00:20:16.822600664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:16.823660 containerd[1473]: time="2025-09-09T00:20:16.823615720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 9 00:20:16.824940 containerd[1473]: time="2025-09-09T00:20:16.824912874Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:16.827219 containerd[1473]: time="2025-09-09T00:20:16.827172936Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:16.828056 containerd[1473]: time="2025-09-09T00:20:16.828017743Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.294321055s" Sep 9 00:20:16.828056 containerd[1473]: time="2025-09-09T00:20:16.828052678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 9 00:20:16.840476 containerd[1473]: time="2025-09-09T00:20:16.840415463Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 9 00:20:16.859723 containerd[1473]: time="2025-09-09T00:20:16.859655298Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a\"" Sep 9 00:20:16.860231 containerd[1473]: time="2025-09-09T00:20:16.860165596Z" level=info msg="StartContainer for \"4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a\"" Sep 9 00:20:16.903623 systemd[1]: Started cri-containerd-4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a.scope - libcontainer container 4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a. Sep 9 00:20:16.941864 containerd[1473]: time="2025-09-09T00:20:16.941794287Z" level=info msg="StartContainer for \"4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a\" returns successfully" Sep 9 00:20:17.428218 kubelet[2562]: E0909 00:20:17.428144 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:18.238265 systemd[1]: cri-containerd-4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a.scope: Deactivated successfully. Sep 9 00:20:18.262628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a-rootfs.mount: Deactivated successfully. Sep 9 00:20:18.310827 kubelet[2562]: I0909 00:20:18.310785 2562 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 9 00:20:18.584793 containerd[1473]: time="2025-09-09T00:20:18.584578053Z" level=info msg="shim disconnected" id=4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a namespace=k8s.io Sep 9 00:20:18.584793 containerd[1473]: time="2025-09-09T00:20:18.584652152Z" level=warning msg="cleaning up after shim disconnected" id=4b8aa5483152a8dee4778b913036d4bc1f394ef380460d3d10bbc580c0d2546a namespace=k8s.io Sep 9 00:20:18.584793 containerd[1473]: time="2025-09-09T00:20:18.584665457Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 9 00:20:18.633010 systemd[1]: Created slice kubepods-burstable-pod9eba6dae_45e2_4ee0_9d05_4984a3603e03.slice - libcontainer container kubepods-burstable-pod9eba6dae_45e2_4ee0_9d05_4984a3603e03.slice. Sep 9 00:20:18.642244 systemd[1]: Created slice kubepods-besteffort-pod4f10fd88_00c6_468e_a12c_ae8ac5f160de.slice - libcontainer container kubepods-besteffort-pod4f10fd88_00c6_468e_a12c_ae8ac5f160de.slice. 
Sep 9 00:20:18.650069 systemd[1]: Created slice kubepods-burstable-pod6169582d_5f41_430b_9890_0f5959297de0.slice - libcontainer container kubepods-burstable-pod6169582d_5f41_430b_9890_0f5959297de0.slice. Sep 9 00:20:18.657183 systemd[1]: Created slice kubepods-besteffort-pod8bf0de85_d410_4847_a3a5_0ed12ae23e80.slice - libcontainer container kubepods-besteffort-pod8bf0de85_d410_4847_a3a5_0ed12ae23e80.slice. Sep 9 00:20:18.665742 systemd[1]: Created slice kubepods-besteffort-pode65cbb6f_ba37_4168_a027_ea0ff3dac6d4.slice - libcontainer container kubepods-besteffort-pode65cbb6f_ba37_4168_a027_ea0ff3dac6d4.slice. Sep 9 00:20:18.670466 kubelet[2562]: I0909 00:20:18.669938 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d5d79a3-0364-4187-9384-d9371101170a-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-b8v4d\" (UID: \"2d5d79a3-0364-4187-9384-d9371101170a\") " pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 00:20:18.670466 kubelet[2562]: I0909 00:20:18.669973 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1747ee41-b82d-4e7c-9b85-9f845fa3552f-calico-apiserver-certs\") pod \"calico-apiserver-c9b45b4c5-8gmqn\" (UID: \"1747ee41-b82d-4e7c-9b85-9f845fa3552f\") " pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" Sep 9 00:20:18.670466 kubelet[2562]: I0909 00:20:18.669998 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-backend-key-pair\") pod \"whisker-54cbddff85-98lc8\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " pod="calico-system/whisker-54cbddff85-98lc8" Sep 9 00:20:18.670466 kubelet[2562]: I0909 00:20:18.670011 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9eba6dae-45e2-4ee0-9d05-4984a3603e03-config-volume\") pod \"coredns-674b8bbfcf-88jjn\" (UID: \"9eba6dae-45e2-4ee0-9d05-4984a3603e03\") " pod="kube-system/coredns-674b8bbfcf-88jjn" Sep 9 00:20:18.670466 kubelet[2562]: I0909 00:20:18.670028 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrc6s\" (UniqueName: \"kubernetes.io/projected/1747ee41-b82d-4e7c-9b85-9f845fa3552f-kube-api-access-xrc6s\") pod \"calico-apiserver-c9b45b4c5-8gmqn\" (UID: \"1747ee41-b82d-4e7c-9b85-9f845fa3552f\") " pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" Sep 9 00:20:18.671041 kubelet[2562]: I0909 00:20:18.670044 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d5d79a3-0364-4187-9384-d9371101170a-config\") pod \"goldmane-54d579b49d-b8v4d\" (UID: \"2d5d79a3-0364-4187-9384-d9371101170a\") " pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 00:20:18.671041 kubelet[2562]: I0909 00:20:18.670066 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nscqx\" (UniqueName: \"kubernetes.io/projected/2d5d79a3-0364-4187-9384-d9371101170a-kube-api-access-nscqx\") pod \"goldmane-54d579b49d-b8v4d\" (UID: \"2d5d79a3-0364-4187-9384-d9371101170a\") " pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 00:20:18.671041 kubelet[2562]: I0909 
00:20:18.670081 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prhfv\" (UniqueName: \"kubernetes.io/projected/4f10fd88-00c6-468e-a12c-ae8ac5f160de-kube-api-access-prhfv\") pod \"calico-apiserver-c9b45b4c5-f5xxs\" (UID: \"4f10fd88-00c6-468e-a12c-ae8ac5f160de\") " pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" Sep 9 00:20:18.671041 kubelet[2562]: I0909 00:20:18.670119 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-ca-bundle\") pod \"whisker-54cbddff85-98lc8\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " pod="calico-system/whisker-54cbddff85-98lc8" Sep 9 00:20:18.671041 kubelet[2562]: I0909 00:20:18.670144 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e65cbb6f-ba37-4168-a027-ea0ff3dac6d4-tigera-ca-bundle\") pod \"calico-kube-controllers-5cbbb5d746-lpbgl\" (UID: \"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4\") " pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" Sep 9 00:20:18.670684 systemd[1]: Created slice kubepods-besteffort-pod1747ee41_b82d_4e7c_9b85_9f845fa3552f.slice - libcontainer container kubepods-besteffort-pod1747ee41_b82d_4e7c_9b85_9f845fa3552f.slice. Sep 9 00:20:18.671598 kubelet[2562]: I0909 00:20:18.670164 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcrkb\" (UniqueName: \"kubernetes.io/projected/e65cbb6f-ba37-4168-a027-ea0ff3dac6d4-kube-api-access-lcrkb\") pod \"calico-kube-controllers-5cbbb5d746-lpbgl\" (UID: \"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4\") " pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" Sep 9 00:20:18.671598 kubelet[2562]: I0909 00:20:18.670188 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvhd\" (UniqueName: \"kubernetes.io/projected/8bf0de85-d410-4847-a3a5-0ed12ae23e80-kube-api-access-vbvhd\") pod \"whisker-54cbddff85-98lc8\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " pod="calico-system/whisker-54cbddff85-98lc8" Sep 9 00:20:18.671598 kubelet[2562]: I0909 00:20:18.670210 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4f10fd88-00c6-468e-a12c-ae8ac5f160de-calico-apiserver-certs\") pod \"calico-apiserver-c9b45b4c5-f5xxs\" (UID: \"4f10fd88-00c6-468e-a12c-ae8ac5f160de\") " pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" Sep 9 00:20:18.671598 kubelet[2562]: I0909 00:20:18.670229 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cljkv\" (UniqueName: \"kubernetes.io/projected/6169582d-5f41-430b-9890-0f5959297de0-kube-api-access-cljkv\") pod \"coredns-674b8bbfcf-zpbrv\" (UID: \"6169582d-5f41-430b-9890-0f5959297de0\") " pod="kube-system/coredns-674b8bbfcf-zpbrv" Sep 9 00:20:18.671598 kubelet[2562]: I0909 00:20:18.670245 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2d5d79a3-0364-4187-9384-d9371101170a-goldmane-key-pair\") pod \"goldmane-54d579b49d-b8v4d\" (UID: \"2d5d79a3-0364-4187-9384-d9371101170a\") " pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 
00:20:18.672492 kubelet[2562]: I0909 00:20:18.670260 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6169582d-5f41-430b-9890-0f5959297de0-config-volume\") pod \"coredns-674b8bbfcf-zpbrv\" (UID: \"6169582d-5f41-430b-9890-0f5959297de0\") " pod="kube-system/coredns-674b8bbfcf-zpbrv" Sep 9 00:20:18.672492 kubelet[2562]: I0909 00:20:18.670274 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csqxk\" (UniqueName: \"kubernetes.io/projected/9eba6dae-45e2-4ee0-9d05-4984a3603e03-kube-api-access-csqxk\") pod \"coredns-674b8bbfcf-88jjn\" (UID: \"9eba6dae-45e2-4ee0-9d05-4984a3603e03\") " pod="kube-system/coredns-674b8bbfcf-88jjn" Sep 9 00:20:18.678653 systemd[1]: Created slice kubepods-besteffort-pod2d5d79a3_0364_4187_9384_d9371101170a.slice - libcontainer container kubepods-besteffort-pod2d5d79a3_0364_4187_9384_d9371101170a.slice. Sep 9 00:20:18.939569 kubelet[2562]: E0909 00:20:18.939427 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:18.940445 containerd[1473]: time="2025-09-09T00:20:18.940230883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88jjn,Uid:9eba6dae-45e2-4ee0-9d05-4984a3603e03,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:18.946840 containerd[1473]: time="2025-09-09T00:20:18.946793391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-f5xxs,Uid:4f10fd88-00c6-468e-a12c-ae8ac5f160de,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:20:18.954082 kubelet[2562]: E0909 00:20:18.954053 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:18.954453 containerd[1473]: time="2025-09-09T00:20:18.954412661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zpbrv,Uid:6169582d-5f41-430b-9890-0f5959297de0,Namespace:kube-system,Attempt:0,}" Sep 9 00:20:18.961699 containerd[1473]: time="2025-09-09T00:20:18.961661647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cbddff85-98lc8,Uid:8bf0de85-d410-4847-a3a5-0ed12ae23e80,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:18.980494 containerd[1473]: time="2025-09-09T00:20:18.980442006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-8gmqn,Uid:1747ee41-b82d-4e7c-9b85-9f845fa3552f,Namespace:calico-apiserver,Attempt:0,}" Sep 9 00:20:18.983006 containerd[1473]: time="2025-09-09T00:20:18.982966454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbbb5d746-lpbgl,Uid:e65cbb6f-ba37-4168-a027-ea0ff3dac6d4,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:18.983271 containerd[1473]: time="2025-09-09T00:20:18.983107709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b8v4d,Uid:2d5d79a3-0364-4187-9384-d9371101170a,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:19.113779 containerd[1473]: time="2025-09-09T00:20:19.113698105Z" level=error msg="Failed to destroy network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Sep 9 00:20:19.118148 containerd[1473]: time="2025-09-09T00:20:19.118101730Z" level=error msg="Failed to destroy network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.119482 containerd[1473]: time="2025-09-09T00:20:19.119450662Z" level=error msg="Failed to destroy network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.121589 containerd[1473]: time="2025-09-09T00:20:19.121544652Z" level=error msg="encountered an error cleaning up failed sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.121643 containerd[1473]: time="2025-09-09T00:20:19.121614763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-54cbddff85-98lc8,Uid:8bf0de85-d410-4847-a3a5-0ed12ae23e80,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.121960 kubelet[2562]: E0909 00:20:19.121906 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122019 kubelet[2562]: E0909 00:20:19.121988 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54cbddff85-98lc8" Sep 9 00:20:19.122047 kubelet[2562]: E0909 00:20:19.122033 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-54cbddff85-98lc8" Sep 9 00:20:19.122130 kubelet[2562]: E0909 00:20:19.122095 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-54cbddff85-98lc8_calico-system(8bf0de85-d410-4847-a3a5-0ed12ae23e80)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"whisker-54cbddff85-98lc8_calico-system(8bf0de85-d410-4847-a3a5-0ed12ae23e80)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54cbddff85-98lc8" podUID="8bf0de85-d410-4847-a3a5-0ed12ae23e80" Sep 9 00:20:19.122853 containerd[1473]: time="2025-09-09T00:20:19.122186867Z" level=error msg="encountered an error cleaning up failed sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122853 containerd[1473]: time="2025-09-09T00:20:19.122246609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88jjn,Uid:9eba6dae-45e2-4ee0-9d05-4984a3603e03,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122853 containerd[1473]: time="2025-09-09T00:20:19.122752238Z" level=error msg="encountered an error cleaning up failed sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122853 containerd[1473]: time="2025-09-09T00:20:19.122794176Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-f5xxs,Uid:4f10fd88-00c6-468e-a12c-ae8ac5f160de,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122987 kubelet[2562]: E0909 00:20:19.122410 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.122987 kubelet[2562]: E0909 00:20:19.122448 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88jjn" Sep 9 00:20:19.122987 kubelet[2562]: E0909 00:20:19.122470 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-88jjn" Sep 9 00:20:19.123071 kubelet[2562]: E0909 00:20:19.122513 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-88jjn_kube-system(9eba6dae-45e2-4ee0-9d05-4984a3603e03)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-88jjn_kube-system(9eba6dae-45e2-4ee0-9d05-4984a3603e03)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-88jjn" podUID="9eba6dae-45e2-4ee0-9d05-4984a3603e03" Sep 9 00:20:19.123071 kubelet[2562]: E0909 00:20:19.122929 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.123071 kubelet[2562]: E0909 00:20:19.122963 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" Sep 9 00:20:19.123170 kubelet[2562]: E0909 00:20:19.122983 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" Sep 9 00:20:19.123170 kubelet[2562]: E0909 00:20:19.123024 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c9b45b4c5-f5xxs_calico-apiserver(4f10fd88-00c6-468e-a12c-ae8ac5f160de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c9b45b4c5-f5xxs_calico-apiserver(4f10fd88-00c6-468e-a12c-ae8ac5f160de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" podUID="4f10fd88-00c6-468e-a12c-ae8ac5f160de" Sep 9 00:20:19.281840 containerd[1473]: time="2025-09-09T00:20:19.281773231Z" level=error msg="Failed to destroy network 
for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.282146 containerd[1473]: time="2025-09-09T00:20:19.282055552Z" level=error msg="Failed to destroy network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.282640 containerd[1473]: time="2025-09-09T00:20:19.282595324Z" level=error msg="encountered an error cleaning up failed sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.282785 containerd[1473]: time="2025-09-09T00:20:19.282753641Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbbb5d746-lpbgl,Uid:e65cbb6f-ba37-4168-a027-ea0ff3dac6d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.283274 kubelet[2562]: E0909 00:20:19.283229 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.283944 kubelet[2562]: E0909 00:20:19.283505 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" Sep 9 00:20:19.283944 kubelet[2562]: E0909 00:20:19.283543 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" Sep 9 00:20:19.283944 kubelet[2562]: E0909 00:20:19.283620 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5cbbb5d746-lpbgl_calico-system(e65cbb6f-ba37-4168-a027-ea0ff3dac6d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5cbbb5d746-lpbgl_calico-system(e65cbb6f-ba37-4168-a027-ea0ff3dac6d4)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" podUID="e65cbb6f-ba37-4168-a027-ea0ff3dac6d4" Sep 9 00:20:19.284852 containerd[1473]: time="2025-09-09T00:20:19.284802026Z" level=error msg="Failed to destroy network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.285044 containerd[1473]: time="2025-09-09T00:20:19.284991081Z" level=error msg="encountered an error cleaning up failed sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.285095 containerd[1473]: time="2025-09-09T00:20:19.285058067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-8gmqn,Uid:1747ee41-b82d-4e7c-9b85-9f845fa3552f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.285304 kubelet[2562]: E0909 00:20:19.285262 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.285415 kubelet[2562]: E0909 00:20:19.285311 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" Sep 9 00:20:19.285415 kubelet[2562]: E0909 00:20:19.285346 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" Sep 9 00:20:19.285750 kubelet[2562]: E0909 00:20:19.285408 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c9b45b4c5-8gmqn_calico-apiserver(1747ee41-b82d-4e7c-9b85-9f845fa3552f)\" with CreatePodSandboxError: \"Failed to create sandbox 
for pod \\\"calico-apiserver-c9b45b4c5-8gmqn_calico-apiserver(1747ee41-b82d-4e7c-9b85-9f845fa3552f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" podUID="1747ee41-b82d-4e7c-9b85-9f845fa3552f" Sep 9 00:20:19.286137 containerd[1473]: time="2025-09-09T00:20:19.286026404Z" level=error msg="encountered an error cleaning up failed sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.286137 containerd[1473]: time="2025-09-09T00:20:19.286083521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zpbrv,Uid:6169582d-5f41-430b-9890-0f5959297de0,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.286150 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519-shm.mount: Deactivated successfully. Sep 9 00:20:19.286289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa-shm.mount: Deactivated successfully. 
Sep 9 00:20:19.288786 kubelet[2562]: E0909 00:20:19.288467 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.288786 kubelet[2562]: E0909 00:20:19.288623 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zpbrv" Sep 9 00:20:19.288786 kubelet[2562]: E0909 00:20:19.288675 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-zpbrv" Sep 9 00:20:19.289085 kubelet[2562]: E0909 00:20:19.288904 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-zpbrv_kube-system(6169582d-5f41-430b-9890-0f5959297de0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-zpbrv_kube-system(6169582d-5f41-430b-9890-0f5959297de0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zpbrv" podUID="6169582d-5f41-430b-9890-0f5959297de0" Sep 9 00:20:19.291313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5-shm.mount: Deactivated successfully. 
Sep 9 00:20:19.291846 containerd[1473]: time="2025-09-09T00:20:19.291798827Z" level=error msg="Failed to destroy network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.292313 containerd[1473]: time="2025-09-09T00:20:19.292280231Z" level=error msg="encountered an error cleaning up failed sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.292422 containerd[1473]: time="2025-09-09T00:20:19.292346255Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b8v4d,Uid:2d5d79a3-0364-4187-9384-d9371101170a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.292622 kubelet[2562]: E0909 00:20:19.292583 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.292692 kubelet[2562]: E0909 00:20:19.292642 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 00:20:19.292692 kubelet[2562]: E0909 00:20:19.292667 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-b8v4d" Sep 9 00:20:19.292823 kubelet[2562]: E0909 00:20:19.292722 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-b8v4d_calico-system(2d5d79a3-0364-4187-9384-d9371101170a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-b8v4d_calico-system(2d5d79a3-0364-4187-9384-d9371101170a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-54d579b49d-b8v4d" podUID="2d5d79a3-0364-4187-9384-d9371101170a" Sep 9 00:20:19.294938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79-shm.mount: Deactivated successfully. Sep 9 00:20:19.433530 systemd[1]: Created slice kubepods-besteffort-pod167309ad_7f53_41fb_a5c4_b6c3ac0a5dbe.slice - libcontainer container kubepods-besteffort-pod167309ad_7f53_41fb_a5c4_b6c3ac0a5dbe.slice. Sep 9 00:20:19.435743 containerd[1473]: time="2025-09-09T00:20:19.435699213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zmjc,Uid:167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:19.502204 containerd[1473]: time="2025-09-09T00:20:19.502092962Z" level=error msg="Failed to destroy network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.502753 containerd[1473]: time="2025-09-09T00:20:19.502694791Z" level=error msg="encountered an error cleaning up failed sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.502853 containerd[1473]: time="2025-09-09T00:20:19.502791653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zmjc,Uid:167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.503208 kubelet[2562]: E0909 00:20:19.503140 2562 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.503289 kubelet[2562]: E0909 00:20:19.503244 2562 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:19.503355 kubelet[2562]: E0909 00:20:19.503298 2562 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7zmjc" Sep 9 00:20:19.503470 kubelet[2562]: 
E0909 00:20:19.503424 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7zmjc_calico-system(167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7zmjc_calico-system(167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:19.551239 containerd[1473]: time="2025-09-09T00:20:19.551085401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 9 00:20:19.551352 kubelet[2562]: I0909 00:20:19.551269 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:19.552680 containerd[1473]: time="2025-09-09T00:20:19.552023682Z" level=info msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" Sep 9 00:20:19.553234 kubelet[2562]: I0909 00:20:19.553196 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:19.553893 containerd[1473]: time="2025-09-09T00:20:19.553831104Z" level=info msg="Ensure that sandbox 5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5 in task-service has been cleanup successfully" Sep 9 00:20:19.553949 containerd[1473]: time="2025-09-09T00:20:19.553886729Z" level=info msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" Sep 9 00:20:19.554169 containerd[1473]: time="2025-09-09T00:20:19.554058492Z" level=info msg="Ensure that sandbox 86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242 in task-service has been cleanup successfully" Sep 9 00:20:19.557020 kubelet[2562]: I0909 00:20:19.556988 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:19.557968 containerd[1473]: time="2025-09-09T00:20:19.557932432Z" level=info msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" Sep 9 00:20:19.559833 kubelet[2562]: I0909 00:20:19.559351 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:19.560974 containerd[1473]: time="2025-09-09T00:20:19.560943032Z" level=info msg="Ensure that sandbox beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618 in task-service has been cleanup successfully" Sep 9 00:20:19.561429 containerd[1473]: time="2025-09-09T00:20:19.561402125Z" level=info msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" Sep 9 00:20:19.561696 containerd[1473]: time="2025-09-09T00:20:19.561655490Z" level=info msg="Ensure that sandbox dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79 in task-service has been cleanup successfully" Sep 9 00:20:19.564477 kubelet[2562]: I0909 00:20:19.564126 2562 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:19.566180 containerd[1473]: time="2025-09-09T00:20:19.566129497Z" level=info msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" Sep 9 00:20:19.566342 containerd[1473]: time="2025-09-09T00:20:19.566311939Z" level=info msg="Ensure that sandbox 7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519 in task-service has been cleanup successfully" Sep 9 00:20:19.569442 kubelet[2562]: I0909 00:20:19.569352 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:19.576182 containerd[1473]: time="2025-09-09T00:20:19.575711491Z" level=info msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" Sep 9 00:20:19.576182 containerd[1473]: time="2025-09-09T00:20:19.575925212Z" level=info msg="Ensure that sandbox 723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa in task-service has been cleanup successfully" Sep 9 00:20:19.586100 kubelet[2562]: I0909 00:20:19.586055 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:19.589774 containerd[1473]: time="2025-09-09T00:20:19.589714712Z" level=info msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" Sep 9 00:20:19.590874 containerd[1473]: time="2025-09-09T00:20:19.590539871Z" level=info msg="Ensure that sandbox 6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb in task-service has been cleanup successfully" Sep 9 00:20:19.598152 kubelet[2562]: I0909 00:20:19.598112 2562 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:19.600256 containerd[1473]: time="2025-09-09T00:20:19.600194050Z" level=info msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" Sep 9 00:20:19.604920 containerd[1473]: time="2025-09-09T00:20:19.604598977Z" level=info msg="Ensure that sandbox c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378 in task-service has been cleanup successfully" Sep 9 00:20:19.616099 containerd[1473]: time="2025-09-09T00:20:19.616041372Z" level=error msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" failed" error="failed to destroy network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.616538 kubelet[2562]: E0909 00:20:19.616498 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:19.617545 kubelet[2562]: E0909 00:20:19.616676 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5"} Sep 9 00:20:19.617545 kubelet[2562]: E0909 00:20:19.617486 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6169582d-5f41-430b-9890-0f5959297de0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.617545 kubelet[2562]: E0909 00:20:19.617511 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6169582d-5f41-430b-9890-0f5959297de0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-zpbrv" podUID="6169582d-5f41-430b-9890-0f5959297de0" Sep 9 00:20:19.625036 containerd[1473]: time="2025-09-09T00:20:19.624973055Z" level=error msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" failed" error="failed to destroy network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.625552 kubelet[2562]: E0909 00:20:19.625492 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:19.625639 kubelet[2562]: E0909 00:20:19.625560 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242"} Sep 9 00:20:19.625639 kubelet[2562]: E0909 00:20:19.625604 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.625766 kubelet[2562]: E0909 00:20:19.625636 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7zmjc" podUID="167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe" Sep 9 00:20:19.644857 containerd[1473]: time="2025-09-09T00:20:19.644722221Z" level=error msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" failed" error="failed to destroy network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.645593 kubelet[2562]: E0909 00:20:19.645326 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:19.645593 kubelet[2562]: E0909 00:20:19.645443 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618"} Sep 9 00:20:19.645593 kubelet[2562]: E0909 00:20:19.645493 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9eba6dae-45e2-4ee0-9d05-4984a3603e03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.645593 kubelet[2562]: E0909 00:20:19.645527 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9eba6dae-45e2-4ee0-9d05-4984a3603e03\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-88jjn" podUID="9eba6dae-45e2-4ee0-9d05-4984a3603e03" Sep 9 00:20:19.654545 containerd[1473]: time="2025-09-09T00:20:19.654484373Z" level=error msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" failed" error="failed to destroy network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.654879 kubelet[2562]: E0909 00:20:19.654823 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:19.654931 kubelet[2562]: E0909 00:20:19.654898 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa"} Sep 9 00:20:19.654979 kubelet[2562]: E0909 00:20:19.654956 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1747ee41-b82d-4e7c-9b85-9f845fa3552f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.655042 kubelet[2562]: E0909 00:20:19.654993 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1747ee41-b82d-4e7c-9b85-9f845fa3552f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" podUID="1747ee41-b82d-4e7c-9b85-9f845fa3552f" Sep 9 00:20:19.658560 containerd[1473]: time="2025-09-09T00:20:19.658383451Z" level=error msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" failed" error="failed to destroy network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.658791 kubelet[2562]: E0909 00:20:19.658720 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:19.658863 kubelet[2562]: E0909 00:20:19.658809 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79"} Sep 9 00:20:19.658863 kubelet[2562]: E0909 00:20:19.658854 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2d5d79a3-0364-4187-9384-d9371101170a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.658973 kubelet[2562]: E0909 00:20:19.658887 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2d5d79a3-0364-4187-9384-d9371101170a\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-b8v4d" podUID="2d5d79a3-0364-4187-9384-d9371101170a" Sep 9 00:20:19.659736 containerd[1473]: time="2025-09-09T00:20:19.659682689Z" level=error msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" failed" error="failed to destroy network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.660489 kubelet[2562]: E0909 00:20:19.660436 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:19.660564 kubelet[2562]: E0909 00:20:19.660499 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378"} Sep 9 00:20:19.660564 kubelet[2562]: E0909 00:20:19.660542 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f10fd88-00c6-468e-a12c-ae8ac5f160de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.660647 kubelet[2562]: E0909 00:20:19.660571 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f10fd88-00c6-468e-a12c-ae8ac5f160de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" podUID="4f10fd88-00c6-468e-a12c-ae8ac5f160de" Sep 9 00:20:19.660841 containerd[1473]: time="2025-09-09T00:20:19.660790599Z" level=error msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" failed" error="failed to destroy network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.661051 kubelet[2562]: E0909 00:20:19.661019 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:19.661107 kubelet[2562]: E0909 00:20:19.661052 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519"} Sep 9 00:20:19.661107 kubelet[2562]: E0909 00:20:19.661089 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.661186 kubelet[2562]: E0909 00:20:19.661109 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" podUID="e65cbb6f-ba37-4168-a027-ea0ff3dac6d4" Sep 9 00:20:19.666036 containerd[1473]: time="2025-09-09T00:20:19.665996759Z" level=error msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" failed" error="failed to destroy network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 9 00:20:19.666181 kubelet[2562]: E0909 00:20:19.666153 2562 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:19.666223 kubelet[2562]: E0909 00:20:19.666184 2562 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb"} Sep 9 00:20:19.666223 kubelet[2562]: E0909 00:20:19.666205 2562 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" Sep 9 00:20:19.666298 kubelet[2562]: E0909 00:20:19.666223 2562 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-54cbddff85-98lc8" podUID="8bf0de85-d410-4847-a3a5-0ed12ae23e80" Sep 9 00:20:20.263142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242-shm.mount: Deactivated successfully. Sep 9 00:20:26.284798 systemd[1]: Started sshd@9-10.0.0.15:22-10.0.0.1:52578.service - OpenSSH per-connection server daemon (10.0.0.1:52578). Sep 9 00:20:26.336422 sshd[3783]: Accepted publickey for core from 10.0.0.1 port 52578 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:26.337779 sshd[3783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:26.342844 systemd-logind[1454]: New session 10 of user core. Sep 9 00:20:26.350517 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 9 00:20:26.510853 sshd[3783]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:26.516460 systemd[1]: sshd@9-10.0.0.15:22-10.0.0.1:52578.service: Deactivated successfully. Sep 9 00:20:26.519421 systemd[1]: session-10.scope: Deactivated successfully. Sep 9 00:20:26.520469 systemd-logind[1454]: Session 10 logged out. Waiting for processes to exit. Sep 9 00:20:26.521700 systemd-logind[1454]: Removed session 10. Sep 9 00:20:28.244971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount764019369.mount: Deactivated successfully. 
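[Editor's note] Every StopPodSandbox failure above has the same root cause: the Calico CNI plugin's delete path stats `/var/lib/calico/nodename`, a file the calico/node agent writes once it is up, and at this point calico/node is not yet running (its image only finishes pulling at 00:20:30 below), so kubelet keeps requeueing the teardowns for the goldmane, calico-apiserver, calico-kube-controllers, and whisker pods. A minimal sketch of that readiness guard, assuming only what the error text itself shows (the path and the hint message); the function name is illustrative, not Calico's code:

```go
package main

import (
	"fmt"
	"os"
)

// Path the calico/node agent writes at startup; its absence is how the
// CNI plugin detects that the node agent is not (yet) running.
const nodenamePath = "/var/lib/calico/nodename"

// checkNodeAgent reproduces the guard implied by the repeated
// "stat /var/lib/calico/nodename: no such file or directory" errors above.
func checkNodeAgent() error {
	if _, err := os.Stat(nodenamePath); err != nil {
		return fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return nil
}

func main() {
	if err := checkNodeAgent(); err != nil {
		// Until calico/node starts, every sandbox delete fails like this
		// and kubelet retries with "Error syncing pod, skipping".
		fmt.Println("plugin type=\"calico\" failed (delete):", err)
	}
}
```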
Sep 9 00:20:30.171208 containerd[1473]: time="2025-09-09T00:20:30.171116500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:30.172055 containerd[1473]: time="2025-09-09T00:20:30.172012185Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 9 00:20:30.173513 containerd[1473]: time="2025-09-09T00:20:30.173480002Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:30.176560 containerd[1473]: time="2025-09-09T00:20:30.176485642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:30.177111 containerd[1473]: time="2025-09-09T00:20:30.177054017Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.625914593s" Sep 9 00:20:30.177111 containerd[1473]: time="2025-09-09T00:20:30.177105767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 9 00:20:30.193996 containerd[1473]: time="2025-09-09T00:20:30.193933811Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 9 00:20:30.220039 containerd[1473]: time="2025-09-09T00:20:30.219952389Z" level=info msg="CreateContainer within sandbox \"ad246ce074a8f2ab6a6fdfe0e3f072e7b0c2250ec1689a00bfa655a5be861b63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5413a5b2228ece116e21eb4c3ecc5719b906978c0812f1b213a7ae487632920d\"" Sep 9 00:20:30.220819 containerd[1473]: time="2025-09-09T00:20:30.220772608Z" level=info msg="StartContainer for \"5413a5b2228ece116e21eb4c3ecc5719b906978c0812f1b213a7ae487632920d\"" Sep 9 00:20:30.276511 systemd[1]: Started cri-containerd-5413a5b2228ece116e21eb4c3ecc5719b906978c0812f1b213a7ae487632920d.scope - libcontainer container 5413a5b2228ece116e21eb4c3ecc5719b906978c0812f1b213a7ae487632920d. Sep 9 00:20:30.343670 containerd[1473]: time="2025-09-09T00:20:30.343595232Z" level=info msg="StartContainer for \"5413a5b2228ece116e21eb4c3ecc5719b906978c0812f1b213a7ae487632920d\" returns successfully" Sep 9 00:20:30.424391 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 9 00:20:30.425218 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
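[Editor's note] The pull of ghcr.io/flatcar/calico/node:v3.30.3 completing here (157,078,201 bytes in ~10.63 s, about 14.8 MB/s) is what unblocks the stuck teardowns above: containerd creates and starts the calico-node container, and the wireguard kernel module loads right after it starts (Calico can use WireGuard for pod-traffic encryption). For reference, a sketch of the same PullImage → CreateContainer → StartContainer sequence against containerd's Go client; the socket path and the "k8s.io" namespace are the conventional defaults for a kubelet-managed node, and the container/snapshot IDs are made up:

```go
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// PullImage: fetch and unpack, as in the "Pulled image ... in 10.625914593s" line.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.3", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// CreateContainer, which the CRI plugin logs as
	// "CreateContainer within sandbox ... returns container id ...".
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithImage(image),
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	// StartContainer: create a task for the container and start it.
	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
}
```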
Sep 9 00:20:30.814561 kubelet[2562]: I0909 00:20:30.814432 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-dxgnk" podStartSLOduration=1.6645029409999998 podStartE2EDuration="25.814414915s" podCreationTimestamp="2025-09-09 00:20:05 +0000 UTC" firstStartedPulling="2025-09-09 00:20:06.02796581 +0000 UTC m=+24.715956319" lastFinishedPulling="2025-09-09 00:20:30.177877794 +0000 UTC m=+48.865868293" observedRunningTime="2025-09-09 00:20:30.812683179 +0000 UTC m=+49.500673678" watchObservedRunningTime="2025-09-09 00:20:30.814414915 +0000 UTC m=+49.502405414" Sep 9 00:20:30.884719 containerd[1473]: time="2025-09-09T00:20:30.884642739Z" level=info msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.063 [INFO][3863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.063 [INFO][3863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" iface="eth0" netns="/var/run/netns/cni-7f743887-ed05-d12c-5136-554979007109" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.064 [INFO][3863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" iface="eth0" netns="/var/run/netns/cni-7f743887-ed05-d12c-5136-554979007109" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.064 [INFO][3863] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" iface="eth0" netns="/var/run/netns/cni-7f743887-ed05-d12c-5136-554979007109" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.064 [INFO][3863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.064 [INFO][3863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.128 [INFO][3894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.128 [INFO][3894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.129 [INFO][3894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.136 [WARNING][3894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.136 [INFO][3894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.138 [INFO][3894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:31.144530 containerd[1473]: 2025-09-09 00:20:31.141 [INFO][3863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:31.144935 containerd[1473]: time="2025-09-09T00:20:31.144637630Z" level=info msg="TearDown network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" successfully" Sep 9 00:20:31.144935 containerd[1473]: time="2025-09-09T00:20:31.144671635Z" level=info msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" returns successfully" Sep 9 00:20:31.186385 systemd[1]: run-netns-cni\x2d7f743887\x2ded05\x2dd12c\x2d5136\x2d554979007109.mount: Deactivated successfully. Sep 9 00:20:31.249186 kubelet[2562]: I0909 00:20:31.249128 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbvhd\" (UniqueName: \"kubernetes.io/projected/8bf0de85-d410-4847-a3a5-0ed12ae23e80-kube-api-access-vbvhd\") pod \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " Sep 9 00:20:31.249186 kubelet[2562]: I0909 00:20:31.249182 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-ca-bundle\") pod \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " Sep 9 00:20:31.249186 kubelet[2562]: I0909 00:20:31.249202 2562 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-backend-key-pair\") pod \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\" (UID: \"8bf0de85-d410-4847-a3a5-0ed12ae23e80\") " Sep 9 00:20:31.250781 kubelet[2562]: I0909 00:20:31.250714 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "8bf0de85-d410-4847-a3a5-0ed12ae23e80" (UID: "8bf0de85-d410-4847-a3a5-0ed12ae23e80"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 9 00:20:31.255000 kubelet[2562]: I0909 00:20:31.254957 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bf0de85-d410-4847-a3a5-0ed12ae23e80-kube-api-access-vbvhd" (OuterVolumeSpecName: "kube-api-access-vbvhd") pod "8bf0de85-d410-4847-a3a5-0ed12ae23e80" (UID: "8bf0de85-d410-4847-a3a5-0ed12ae23e80"). InnerVolumeSpecName "kube-api-access-vbvhd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 9 00:20:31.256515 kubelet[2562]: I0909 00:20:31.256474 2562 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8bf0de85-d410-4847-a3a5-0ed12ae23e80" (UID: "8bf0de85-d410-4847-a3a5-0ed12ae23e80"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 9 00:20:31.256851 systemd[1]: var-lib-kubelet-pods-8bf0de85\x2dd410\x2d4847\x2da3a5\x2d0ed12ae23e80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbvhd.mount: Deactivated successfully. Sep 9 00:20:31.256993 systemd[1]: var-lib-kubelet-pods-8bf0de85\x2dd410\x2d4847\x2da3a5\x2d0ed12ae23e80-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 9 00:20:31.350013 kubelet[2562]: I0909 00:20:31.349957 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Sep 9 00:20:31.350013 kubelet[2562]: I0909 00:20:31.350001 2562 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbvhd\" (UniqueName: \"kubernetes.io/projected/8bf0de85-d410-4847-a3a5-0ed12ae23e80-kube-api-access-vbvhd\") on node \"localhost\" DevicePath \"\"" Sep 9 00:20:31.350013 kubelet[2562]: I0909 00:20:31.350011 2562 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bf0de85-d410-4847-a3a5-0ed12ae23e80-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Sep 9 00:20:31.430173 containerd[1473]: time="2025-09-09T00:20:31.428527687Z" level=info msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" Sep 9 00:20:31.430173 containerd[1473]: time="2025-09-09T00:20:31.429711947Z" level=info msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" Sep 9 00:20:31.444711 systemd[1]: Removed slice kubepods-besteffort-pod8bf0de85_d410_4847_a3a5_0ed12ae23e80.slice - libcontainer container kubepods-besteffort-pod8bf0de85_d410_4847_a3a5_0ed12ae23e80.slice. Sep 9 00:20:31.526334 systemd[1]: Started sshd@10-10.0.0.15:22-10.0.0.1:41422.service - OpenSSH per-connection server daemon (10.0.0.1:41422). Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3937] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3937] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" iface="eth0" netns="/var/run/netns/cni-cf91c5cd-7e75-ddbf-b286-07da24ede3a4" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3937] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" iface="eth0" netns="/var/run/netns/cni-cf91c5cd-7e75-ddbf-b286-07da24ede3a4" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3937] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" iface="eth0" netns="/var/run/netns/cni-cf91c5cd-7e75-ddbf-b286-07da24ede3a4" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.491 [INFO][3937] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.491 [INFO][3937] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.517 [INFO][3955] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.517 [INFO][3955] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.517 [INFO][3955] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.523 [WARNING][3955] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.523 [INFO][3955] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.525 [INFO][3955] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:31.533516 containerd[1473]: 2025-09-09 00:20:31.529 [INFO][3937] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:31.537074 systemd[1]: run-netns-cni\x2dcf91c5cd\x2d7e75\x2dddbf\x2db286\x2d07da24ede3a4.mount: Deactivated successfully. Sep 9 00:20:31.538311 containerd[1473]: time="2025-09-09T00:20:31.538245137Z" level=info msg="TearDown network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" successfully" Sep 9 00:20:31.538311 containerd[1473]: time="2025-09-09T00:20:31.538297549Z" level=info msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" returns successfully" Sep 9 00:20:31.540443 containerd[1473]: time="2025-09-09T00:20:31.539667535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zmjc,Uid:167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe,Namespace:calico-system,Attempt:1,}" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.488 [INFO][3938] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.488 [INFO][3938] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" iface="eth0" netns="/var/run/netns/cni-8198860a-913a-b9bf-2a8e-da38ff8fb387" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.489 [INFO][3938] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" iface="eth0" netns="/var/run/netns/cni-8198860a-913a-b9bf-2a8e-da38ff8fb387" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3938] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" iface="eth0" netns="/var/run/netns/cni-8198860a-913a-b9bf-2a8e-da38ff8fb387" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3938] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.490 [INFO][3938] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.518 [INFO][3954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.518 [INFO][3954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.525 [INFO][3954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.536 [WARNING][3954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.537 [INFO][3954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.541 [INFO][3954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:31.547936 containerd[1473]: 2025-09-09 00:20:31.544 [INFO][3938] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:31.549768 containerd[1473]: time="2025-09-09T00:20:31.548127861Z" level=info msg="TearDown network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" successfully" Sep 9 00:20:31.549768 containerd[1473]: time="2025-09-09T00:20:31.548162286Z" level=info msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" returns successfully" Sep 9 00:20:31.549768 containerd[1473]: time="2025-09-09T00:20:31.549426751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbbb5d746-lpbgl,Uid:e65cbb6f-ba37-4168-a027-ea0ff3dac6d4,Namespace:calico-system,Attempt:1,}" Sep 9 00:20:31.550914 systemd[1]: run-netns-cni\x2d8198860a\x2d913a\x2db9bf\x2d2a8e\x2dda38ff8fb387.mount: Deactivated successfully. Sep 9 00:20:31.601984 sshd[3970]: Accepted publickey for core from 10.0.0.1 port 41422 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:31.604323 sshd[3970]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:31.610337 systemd-logind[1454]: New session 11 of user core. Sep 9 00:20:31.616545 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 9 00:20:31.750067 systemd[1]: Created slice kubepods-besteffort-pod50ba3402_3a49_4439_a48b_b3db549fac71.slice - libcontainer container kubepods-besteffort-pod50ba3402_3a49_4439_a48b_b3db549fac71.slice. Sep 9 00:20:31.755718 kubelet[2562]: I0909 00:20:31.755655 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blbfg\" (UniqueName: \"kubernetes.io/projected/50ba3402-3a49-4439-a48b-b3db549fac71-kube-api-access-blbfg\") pod \"whisker-5db6d46c68-6wzfx\" (UID: \"50ba3402-3a49-4439-a48b-b3db549fac71\") " pod="calico-system/whisker-5db6d46c68-6wzfx" Sep 9 00:20:31.758077 kubelet[2562]: I0909 00:20:31.758047 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50ba3402-3a49-4439-a48b-b3db549fac71-whisker-backend-key-pair\") pod \"whisker-5db6d46c68-6wzfx\" (UID: \"50ba3402-3a49-4439-a48b-b3db549fac71\") " pod="calico-system/whisker-5db6d46c68-6wzfx" Sep 9 00:20:31.758225 kubelet[2562]: I0909 00:20:31.758211 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50ba3402-3a49-4439-a48b-b3db549fac71-whisker-ca-bundle\") pod \"whisker-5db6d46c68-6wzfx\" (UID: \"50ba3402-3a49-4439-a48b-b3db549fac71\") " pod="calico-system/whisker-5db6d46c68-6wzfx" Sep 9 00:20:31.824754 systemd-networkd[1410]: cali17e7d8a29c3: Link UP Sep 9 00:20:31.825588 systemd-networkd[1410]: cali17e7d8a29c3: Gained carrier Sep 9 00:20:31.843001 sshd[3970]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.594 [INFO][3971] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.611 [INFO][3971] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--7zmjc-eth0 csi-node-driver- calico-system 167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe 1042 0 2025-09-09 00:20:05 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver 
name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-7zmjc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali17e7d8a29c3 [] [] }} ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.611 [INFO][3971] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.660 [INFO][4001] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" HandleID="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.661 [INFO][4001] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" HandleID="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e790), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-7zmjc", "timestamp":"2025-09-09 00:20:31.658699525 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.661 [INFO][4001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.661 [INFO][4001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.661 [INFO][4001] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.677 [INFO][4001] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.706 [INFO][4001] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.748 [INFO][4001] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.759 [INFO][4001] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.766 [INFO][4001] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.766 [INFO][4001] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.769 [INFO][4001] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626 Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.779 [INFO][4001] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.785 [INFO][4001] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.786 [INFO][4001] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" host="localhost" Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.786 [INFO][4001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
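[Editor's note] The IPAM trace above shows Calico's block-affinity model: the pool is carved into /26 blocks of 64 addresses, block 192.168.88.128/26 is affine to this host ("Trying affinity for 192.168.88.128/26 host=\"localhost\""), and each pod scheduled here draws the next free address from that block under the host-wide lock, which is why csi-node-driver gets 192.168.88.129 here and the two pods below get .130 and .131. A toy sketch of that sequential hand-out, assuming nothing beyond the block and addresses in the log (the `block` type and `assign` method are illustrative, not Calico's code):

```go
package main

import (
	"fmt"
	"net/netip"
)

// block models a host-affine Calico IPAM block: a /26 (64 addresses)
// plus the set of already-claimed addresses.
type block struct {
	prefix  netip.Prefix
	claimed map[netip.Addr]bool
}

// assign hands out the lowest free address, skipping the block's own
// network address (.128), the way .129, .130, .131 appear in this log.
func (b *block) assign() (netip.Addr, bool) {
	for a := b.prefix.Addr().Next(); b.prefix.Contains(a); a = a.Next() {
		if !b.claimed[a] {
			b.claimed[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		prefix:  netip.MustParsePrefix("192.168.88.128/26"), // block from the log
		claimed: map[netip.Addr]bool{},
	}
	for i := 0; i < 3; i++ {
		a, _ := b.assign()
		fmt.Println(a) // 192.168.88.129, 192.168.88.130, 192.168.88.131
	}
}
```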
Sep 9 00:20:31.844381 containerd[1473]: 2025-09-09 00:20:31.786 [INFO][4001] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" HandleID="k8s-pod-network.c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.795 [INFO][3971] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7zmjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-7zmjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17e7d8a29c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.797 [INFO][3971] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.797 [INFO][3971] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali17e7d8a29c3 ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.822 [INFO][3971] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.823 [INFO][3971] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7zmjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626", Pod:"csi-node-driver-7zmjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17e7d8a29c3", MAC:"3e:15:ce:18:29:48", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:31.845250 containerd[1473]: 2025-09-09 00:20:31.837 [INFO][3971] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626" Namespace="calico-system" Pod="csi-node-driver-7zmjc" WorkloadEndpoint="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:31.847882 systemd-logind[1454]: Session 11 logged out. Waiting for processes to exit. Sep 9 00:20:31.848586 systemd[1]: sshd@10-10.0.0.15:22-10.0.0.1:41422.service: Deactivated successfully. Sep 9 00:20:31.854654 systemd[1]: session-11.scope: Deactivated successfully. Sep 9 00:20:31.856717 systemd-logind[1454]: Removed session 11. Sep 9 00:20:31.889152 systemd-networkd[1410]: cali276af539ac3: Link UP Sep 9 00:20:31.889562 systemd-networkd[1410]: cali276af539ac3: Gained carrier Sep 9 00:20:31.896439 containerd[1473]: time="2025-09-09T00:20:31.896285584Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:31.896439 containerd[1473]: time="2025-09-09T00:20:31.896356310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:31.896439 containerd[1473]: time="2025-09-09T00:20:31.896404864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:31.896664 containerd[1473]: time="2025-09-09T00:20:31.896535805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.615 [INFO][3986] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.633 [INFO][3986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0 calico-kube-controllers-5cbbb5d746- calico-system e65cbb6f-ba37-4168-a027-ea0ff3dac6d4 1041 0 2025-09-09 00:20:05 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5cbbb5d746 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5cbbb5d746-lpbgl eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali276af539ac3 [] [] }} ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.633 [INFO][3986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.718 [INFO][4009] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" HandleID="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.719 [INFO][4009] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" HandleID="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5cbbb5d746-lpbgl", "timestamp":"2025-09-09 00:20:31.718418866 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.719 [INFO][4009] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.787 [INFO][4009] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.787 [INFO][4009] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.821 [INFO][4009] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.838 [INFO][4009] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.851 [INFO][4009] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.853 [INFO][4009] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.859 [INFO][4009] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.861 [INFO][4009] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.865 [INFO][4009] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703 Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.870 [INFO][4009] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.878 [INFO][4009] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.878 [INFO][4009] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" host="localhost" Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.878 [INFO][4009] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
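[Editor's note] With the address claimed and the host-wide lock released, the plugin reports a structured result back to containerd. A stdlib-JSON sketch of the CNI 1.0.0 ADD result shape for this calico-kube-controllers endpoint, filled with values the log provides (host veth name, MAC, the /32 from the block); the netns path and interface index are illustrative assumptions:

```go
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// Minimal mirror of the CNI 1.0.0 ADD result schema a plugin returns to
// the runtime once an address is assigned and the veth pair is created.
type cniResult struct {
	CNIVersion string         `json:"cniVersion"`
	Interfaces []cniInterface `json:"interfaces"`
	IPs        []cniIP        `json:"ips"`
}

type cniInterface struct {
	Name    string `json:"name"`
	Mac     string `json:"mac,omitempty"`
	Sandbox string `json:"sandbox,omitempty"`
}

type cniIP struct {
	Address   string `json:"address"`
	Interface int    `json:"interface"`
}

func main() {
	res := cniResult{
		CNIVersion: "1.0.0",
		Interfaces: []cniInterface{
			{Name: "cali276af539ac3"}, // host-side veth, from the log
			// Container-side interface; MAC from the endpoint write below,
			// netns path deliberately left as a placeholder.
			{Name: "eth0", Mac: "56:e6:bc:cc:a9:b1", Sandbox: "/var/run/netns/..."},
		},
		IPs: []cniIP{
			// The /32 claimed from block 192.168.88.128/26 above.
			{Address: "192.168.88.130/32", Interface: 1},
		},
	}
	out, err := json.MarshalIndent(res, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out))
}
```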
Sep 9 00:20:31.910031 containerd[1473]: 2025-09-09 00:20:31.878 [INFO][4009] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" HandleID="k8s-pod-network.09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.883 [INFO][3986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0", GenerateName:"calico-kube-controllers-5cbbb5d746-", Namespace:"calico-system", SelfLink:"", UID:"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbbb5d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5cbbb5d746-lpbgl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali276af539ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.883 [INFO][3986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.883 [INFO][3986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali276af539ac3 ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.890 [INFO][3986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.890 [INFO][3986] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0", GenerateName:"calico-kube-controllers-5cbbb5d746-", Namespace:"calico-system", SelfLink:"", UID:"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbbb5d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703", Pod:"calico-kube-controllers-5cbbb5d746-lpbgl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali276af539ac3", MAC:"56:e6:bc:cc:a9:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:31.910801 containerd[1473]: 2025-09-09 00:20:31.905 [INFO][3986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703" Namespace="calico-system" Pod="calico-kube-controllers-5cbbb5d746-lpbgl" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:31.928620 systemd[1]: Started cri-containerd-c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626.scope - libcontainer container c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626. Sep 9 00:20:31.943248 containerd[1473]: time="2025-09-09T00:20:31.939576993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:31.943248 containerd[1473]: time="2025-09-09T00:20:31.940828513Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:31.943248 containerd[1473]: time="2025-09-09T00:20:31.940847960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:31.943248 containerd[1473]: time="2025-09-09T00:20:31.940976147Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:31.945492 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:31.960417 containerd[1473]: time="2025-09-09T00:20:31.960376920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7zmjc,Uid:167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe,Namespace:calico-system,Attempt:1,} returns sandbox id \"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626\"" Sep 9 00:20:31.964839 containerd[1473]: time="2025-09-09T00:20:31.964608907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 9 00:20:31.968242 systemd[1]: Started cri-containerd-09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703.scope - libcontainer container 09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703. Sep 9 00:20:31.982907 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:32.011635 containerd[1473]: time="2025-09-09T00:20:32.011481174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5cbbb5d746-lpbgl,Uid:e65cbb6f-ba37-4168-a027-ea0ff3dac6d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703\"" Sep 9 00:20:32.055320 containerd[1473]: time="2025-09-09T00:20:32.055268009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5db6d46c68-6wzfx,Uid:50ba3402-3a49-4439-a48b-b3db549fac71,Namespace:calico-system,Attempt:0,}" Sep 9 00:20:32.208038 systemd-networkd[1410]: calif35a710e5a0: Link UP Sep 9 00:20:32.208563 systemd-networkd[1410]: calif35a710e5a0: Gained carrier Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.108 [INFO][4159] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.120 [INFO][4159] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--5db6d46c68--6wzfx-eth0 whisker-5db6d46c68- calico-system 50ba3402-3a49-4439-a48b-b3db549fac71 1059 0 2025-09-09 00:20:31 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5db6d46c68 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-5db6d46c68-6wzfx eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calif35a710e5a0 [] [] }} ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.120 [INFO][4159] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.152 [INFO][4170] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" HandleID="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Workload="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 
00:20:32.153 [INFO][4170] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" HandleID="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Workload="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7190), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-5db6d46c68-6wzfx", "timestamp":"2025-09-09 00:20:32.15290648 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.153 [INFO][4170] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.153 [INFO][4170] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.153 [INFO][4170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.161 [INFO][4170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.168 [INFO][4170] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.175 [INFO][4170] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.178 [INFO][4170] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.182 [INFO][4170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.182 [INFO][4170] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.184 [INFO][4170] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4 Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.192 [INFO][4170] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.202 [INFO][4170] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.202 [INFO][4170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" host="localhost" Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.202 [INFO][4170] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:20:32.231730 containerd[1473]: 2025-09-09 00:20:32.202 [INFO][4170] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" HandleID="k8s-pod-network.01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Workload="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.205 [INFO][4159] cni-plugin/k8s.go 418: Populated endpoint ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5db6d46c68--6wzfx-eth0", GenerateName:"whisker-5db6d46c68-", Namespace:"calico-system", SelfLink:"", UID:"50ba3402-3a49-4439-a48b-b3db549fac71", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5db6d46c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-5db6d46c68-6wzfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif35a710e5a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.205 [INFO][4159] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.205 [INFO][4159] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif35a710e5a0 ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.208 [INFO][4159] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.209 [INFO][4159] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--5db6d46c68--6wzfx-eth0", GenerateName:"whisker-5db6d46c68-", Namespace:"calico-system", SelfLink:"", UID:"50ba3402-3a49-4439-a48b-b3db549fac71", ResourceVersion:"1059", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5db6d46c68", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4", Pod:"whisker-5db6d46c68-6wzfx", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calif35a710e5a0", MAC:"02:97:6c:78:e5:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:32.232592 containerd[1473]: 2025-09-09 00:20:32.227 [INFO][4159] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4" Namespace="calico-system" Pod="whisker-5db6d46c68-6wzfx" WorkloadEndpoint="localhost-k8s-whisker--5db6d46c68--6wzfx-eth0" Sep 9 00:20:32.253934 containerd[1473]: time="2025-09-09T00:20:32.252619842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:32.253934 containerd[1473]: time="2025-09-09T00:20:32.253623593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:32.253934 containerd[1473]: time="2025-09-09T00:20:32.253646307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:32.254308 containerd[1473]: time="2025-09-09T00:20:32.253950331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:32.283570 systemd[1]: Started cri-containerd-01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4.scope - libcontainer container 01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4. 
Sep 9 00:20:32.298523 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:32.328659 containerd[1473]: time="2025-09-09T00:20:32.328605744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5db6d46c68-6wzfx,Uid:50ba3402-3a49-4439-a48b-b3db549fac71,Namespace:calico-system,Attempt:0,} returns sandbox id \"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4\"" Sep 9 00:20:32.429403 containerd[1473]: time="2025-09-09T00:20:32.429287338Z" level=info msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" Sep 9 00:20:32.431923 containerd[1473]: time="2025-09-09T00:20:32.430667313Z" level=info msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" Sep 9 00:20:32.433098 containerd[1473]: time="2025-09-09T00:20:32.433032522Z" level=info msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.710 [INFO][4329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.712 [INFO][4329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" iface="eth0" netns="/var/run/netns/cni-49ae0260-ce7e-a17d-91a0-8a43c214bff7" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.712 [INFO][4329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" iface="eth0" netns="/var/run/netns/cni-49ae0260-ce7e-a17d-91a0-8a43c214bff7" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.712 [INFO][4329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" iface="eth0" netns="/var/run/netns/cni-49ae0260-ce7e-a17d-91a0-8a43c214bff7" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.713 [INFO][4329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.713 [INFO][4329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.755 [INFO][4384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.755 [INFO][4384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.755 [INFO][4384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.769 [WARNING][4384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.770 [INFO][4384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.772 [INFO][4384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:32.791508 containerd[1473]: 2025-09-09 00:20:32.780 [INFO][4329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:32.793058 systemd[1]: run-netns-cni\x2d49ae0260\x2dce7e\x2da17d\x2d91a0\x2d8a43c214bff7.mount: Deactivated successfully. Sep 9 00:20:32.794566 containerd[1473]: time="2025-09-09T00:20:32.793501467Z" level=info msg="TearDown network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" successfully" Sep 9 00:20:32.794566 containerd[1473]: time="2025-09-09T00:20:32.793552434Z" level=info msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" returns successfully" Sep 9 00:20:32.796494 kubelet[2562]: E0909 00:20:32.795742 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:32.797828 containerd[1473]: time="2025-09-09T00:20:32.797327165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zpbrv,Uid:6169582d-5f41-430b-9890-0f5959297de0,Namespace:kube-system,Attempt:1,}" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.728 [INFO][4309] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.729 [INFO][4309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" iface="eth0" netns="/var/run/netns/cni-f4927780-8e1a-37aa-9903-f550ef03f257" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.730 [INFO][4309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" iface="eth0" netns="/var/run/netns/cni-f4927780-8e1a-37aa-9903-f550ef03f257" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.732 [INFO][4309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" iface="eth0" netns="/var/run/netns/cni-f4927780-8e1a-37aa-9903-f550ef03f257" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.732 [INFO][4309] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.732 [INFO][4309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.783 [INFO][4392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.784 [INFO][4392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.784 [INFO][4392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.792 [WARNING][4392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.792 [INFO][4392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.794 [INFO][4392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:32.808891 containerd[1473]: 2025-09-09 00:20:32.797 [INFO][4309] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:32.814320 containerd[1473]: time="2025-09-09T00:20:32.813669230Z" level=info msg="TearDown network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" successfully" Sep 9 00:20:32.814320 containerd[1473]: time="2025-09-09T00:20:32.813710870Z" level=info msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" returns successfully" Sep 9 00:20:32.814749 systemd[1]: run-netns-cni\x2df4927780\x2d8e1a\x2d37aa\x2d9903\x2df550ef03f257.mount: Deactivated successfully. Sep 9 00:20:32.816811 containerd[1473]: time="2025-09-09T00:20:32.816751048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b8v4d,Uid:2d5d79a3-0364-4187-9384-d9371101170a,Namespace:calico-system,Attempt:1,}" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.733 [INFO][4308] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.735 [INFO][4308] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" iface="eth0" netns="/var/run/netns/cni-a1df17ef-ae46-a1d2-6a13-db2877def179" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.735 [INFO][4308] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" iface="eth0" netns="/var/run/netns/cni-a1df17ef-ae46-a1d2-6a13-db2877def179" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.735 [INFO][4308] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" iface="eth0" netns="/var/run/netns/cni-a1df17ef-ae46-a1d2-6a13-db2877def179" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.735 [INFO][4308] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.735 [INFO][4308] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.813 [INFO][4394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.814 [INFO][4394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.814 [INFO][4394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.825 [WARNING][4394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.825 [INFO][4394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.829 [INFO][4394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:32.854475 containerd[1473]: 2025-09-09 00:20:32.846 [INFO][4308] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:32.857386 containerd[1473]: time="2025-09-09T00:20:32.856282037Z" level=info msg="TearDown network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" successfully" Sep 9 00:20:32.857386 containerd[1473]: time="2025-09-09T00:20:32.856317896Z" level=info msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" returns successfully" Sep 9 00:20:32.858046 containerd[1473]: time="2025-09-09T00:20:32.857990684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-f5xxs,Uid:4f10fd88-00c6-468e-a12c-ae8ac5f160de,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:20:32.908418 kernel: bpftool[4482]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 9 00:20:32.942577 systemd-networkd[1410]: cali17e7d8a29c3: Gained IPv6LL Sep 9 00:20:33.014447 systemd-networkd[1410]: cali1cd4e6a6ecf: Link UP Sep 9 00:20:33.014727 systemd-networkd[1410]: cali1cd4e6a6ecf: Gained carrier Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.927 [INFO][4443] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--54d579b49d--b8v4d-eth0 goldmane-54d579b49d- calico-system 2d5d79a3-0364-4187-9384-d9371101170a 1083 0 2025-09-09 00:20:04 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-54d579b49d-b8v4d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1cd4e6a6ecf [] [] }} ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.927 [INFO][4443] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.971 [INFO][4491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" HandleID="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.971 [INFO][4491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" HandleID="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00012ddf0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-54d579b49d-b8v4d", "timestamp":"2025-09-09 00:20:32.971079414 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.971 
[INFO][4491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.971 [INFO][4491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.971 [INFO][4491] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.980 [INFO][4491] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.984 [INFO][4491] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.989 [INFO][4491] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.990 [INFO][4491] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.993 [INFO][4491] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.993 [INFO][4491] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.994 [INFO][4491] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247 Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:32.998 [INFO][4491] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:33.006 [INFO][4491] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:33.006 [INFO][4491] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" host="localhost" Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:33.006 [INFO][4491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:20:33.037040 containerd[1473]: 2025-09-09 00:20:33.006 [INFO][4491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" HandleID="k8s-pod-network.84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.009 [INFO][4443] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b8v4d-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2d5d79a3-0364-4187-9384-d9371101170a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-54d579b49d-b8v4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd4e6a6ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.009 [INFO][4443] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.009 [INFO][4443] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cd4e6a6ecf ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.014 [INFO][4443] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.017 [INFO][4443] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b8v4d-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2d5d79a3-0364-4187-9384-d9371101170a", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247", Pod:"goldmane-54d579b49d-b8v4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd4e6a6ecf", MAC:"66:3a:06:e0:0d:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.038340 containerd[1473]: 2025-09-09 00:20:33.033 [INFO][4443] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247" Namespace="calico-system" Pod="goldmane-54d579b49d-b8v4d" WorkloadEndpoint="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:33.072691 containerd[1473]: time="2025-09-09T00:20:33.068259085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:33.072691 containerd[1473]: time="2025-09-09T00:20:33.072600901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:33.073201 containerd[1473]: time="2025-09-09T00:20:33.072666508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.074385 containerd[1473]: time="2025-09-09T00:20:33.074251675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.109586 systemd[1]: Started cri-containerd-84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247.scope - libcontainer container 84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247. 
Sep 9 00:20:33.137399 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:33.139347 systemd-networkd[1410]: cali5ba7b1e0171: Link UP Sep 9 00:20:33.139833 systemd-networkd[1410]: cali5ba7b1e0171: Gained carrier Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:32.894 [INFO][4414] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0 coredns-674b8bbfcf- kube-system 6169582d-5f41-430b-9890-0f5959297de0 1082 0 2025-09-09 00:19:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-zpbrv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5ba7b1e0171 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:32.896 [INFO][4414] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:32.972 [INFO][4480] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" HandleID="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:32.973 [INFO][4480] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" HandleID="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003c6aa0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-zpbrv", "timestamp":"2025-09-09 00:20:32.972648723 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:32.973 [INFO][4480] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.007 [INFO][4480] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.008 [INFO][4480] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.083 [INFO][4480] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.094 [INFO][4480] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.102 [INFO][4480] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.105 [INFO][4480] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.108 [INFO][4480] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.109 [INFO][4480] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.112 [INFO][4480] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.118 [INFO][4480] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.130 [INFO][4480] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.130 [INFO][4480] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" host="localhost" Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.131 [INFO][4480] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:20:33.164537 containerd[1473]: 2025-09-09 00:20:33.132 [INFO][4480] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" HandleID="k8s-pod-network.aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.165153 containerd[1473]: 2025-09-09 00:20:33.137 [INFO][4414] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6169582d-5f41-430b-9890-0f5959297de0", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-zpbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba7b1e0171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.165153 containerd[1473]: 2025-09-09 00:20:33.137 [INFO][4414] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.165153 containerd[1473]: 2025-09-09 00:20:33.137 [INFO][4414] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ba7b1e0171 ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.165153 containerd[1473]: 2025-09-09 00:20:33.139 [INFO][4414] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.165153 
containerd[1473]: 2025-09-09 00:20:33.143 [INFO][4414] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6169582d-5f41-430b-9890-0f5959297de0", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd", Pod:"coredns-674b8bbfcf-zpbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba7b1e0171", MAC:"e2:8f:b2:4a:ea:c6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.165153 containerd[1473]: 2025-09-09 00:20:33.159 [INFO][4414] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd" Namespace="kube-system" Pod="coredns-674b8bbfcf-zpbrv" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:33.193458 containerd[1473]: time="2025-09-09T00:20:33.192592356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-b8v4d,Uid:2d5d79a3-0364-4187-9384-d9371101170a,Namespace:calico-system,Attempt:1,} returns sandbox id \"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247\"" Sep 9 00:20:33.198101 systemd[1]: run-netns-cni\x2da1df17ef\x2dae46\x2da1d2\x2d6a13\x2ddb2877def179.mount: Deactivated successfully. Sep 9 00:20:33.235194 containerd[1473]: time="2025-09-09T00:20:33.229651492Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:33.235194 containerd[1473]: time="2025-09-09T00:20:33.229722930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:33.235194 containerd[1473]: time="2025-09-09T00:20:33.229734391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.235194 containerd[1473]: time="2025-09-09T00:20:33.229839744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.262596 systemd-networkd[1410]: calif35a710e5a0: Gained IPv6LL Sep 9 00:20:33.267179 systemd[1]: Started cri-containerd-aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd.scope - libcontainer container aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd. Sep 9 00:20:33.291677 systemd-networkd[1410]: calida0972dd047: Link UP Sep 9 00:20:33.292101 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:33.293944 systemd-networkd[1410]: calida0972dd047: Gained carrier Sep 9 00:20:33.312643 systemd-networkd[1410]: vxlan.calico: Link UP Sep 9 00:20:33.313094 systemd-networkd[1410]: vxlan.calico: Gained carrier Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:32.966 [INFO][4462] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0 calico-apiserver-c9b45b4c5- calico-apiserver 4f10fd88-00c6-468e-a12c-ae8ac5f160de 1084 0 2025-09-09 00:20:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c9b45b4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c9b45b4c5-f5xxs eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calida0972dd047 [] [] }} ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:32.969 [INFO][4462] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.017 [INFO][4503] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" HandleID="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.017 [INFO][4503] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" HandleID="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00026d600), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c9b45b4c5-f5xxs", "timestamp":"2025-09-09 00:20:33.017068083 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.017 [INFO][4503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.130 [INFO][4503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.130 [INFO][4503] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.191 [INFO][4503] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.202 [INFO][4503] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.233 [INFO][4503] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.244 [INFO][4503] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.257 [INFO][4503] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.257 [INFO][4503] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.260 [INFO][4503] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854 Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.271 [INFO][4503] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.279 [INFO][4503] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.280 [INFO][4503] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" host="localhost" Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.280 [INFO][4503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:20:33.327156 containerd[1473]: 2025-09-09 00:20:33.280 [INFO][4503] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" HandleID="k8s-pod-network.3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.286 [INFO][4462] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f10fd88-00c6-468e-a12c-ae8ac5f160de", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c9b45b4c5-f5xxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida0972dd047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.286 [INFO][4462] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.286 [INFO][4462] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida0972dd047 ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.294 [INFO][4462] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.295 [INFO][4462] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f10fd88-00c6-468e-a12c-ae8ac5f160de", ResourceVersion:"1084", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854", Pod:"calico-apiserver-c9b45b4c5-f5xxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida0972dd047", MAC:"12:95:10:79:c1:10", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:33.327759 containerd[1473]: 2025-09-09 00:20:33.317 [INFO][4462] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-f5xxs" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:33.331296 containerd[1473]: time="2025-09-09T00:20:33.331244100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zpbrv,Uid:6169582d-5f41-430b-9890-0f5959297de0,Namespace:kube-system,Attempt:1,} returns sandbox id \"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd\"" Sep 9 00:20:33.332259 kubelet[2562]: E0909 00:20:33.332220 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:33.340867 containerd[1473]: time="2025-09-09T00:20:33.340811240Z" level=info msg="CreateContainer within sandbox \"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:20:33.379243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1393934507.mount: Deactivated successfully. Sep 9 00:20:33.383672 containerd[1473]: time="2025-09-09T00:20:33.382084698Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:33.383672 containerd[1473]: time="2025-09-09T00:20:33.382862282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:33.383672 containerd[1473]: time="2025-09-09T00:20:33.382885126Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.383672 containerd[1473]: time="2025-09-09T00:20:33.383135578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:33.392169 containerd[1473]: time="2025-09-09T00:20:33.391840300Z" level=info msg="CreateContainer within sandbox \"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c3ebb6e585b6fbaa60dc05d434d02d44b50f966a8c6767617e0efa9b353b2f94\"" Sep 9 00:20:33.392727 containerd[1473]: time="2025-09-09T00:20:33.392693561Z" level=info msg="StartContainer for \"c3ebb6e585b6fbaa60dc05d434d02d44b50f966a8c6767617e0efa9b353b2f94\"" Sep 9 00:20:33.408505 systemd[1]: Started cri-containerd-3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854.scope - libcontainer container 3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854. Sep 9 00:20:33.428519 systemd[1]: Started cri-containerd-c3ebb6e585b6fbaa60dc05d434d02d44b50f966a8c6767617e0efa9b353b2f94.scope - libcontainer container c3ebb6e585b6fbaa60dc05d434d02d44b50f966a8c6767617e0efa9b353b2f94. Sep 9 00:20:33.430134 containerd[1473]: time="2025-09-09T00:20:33.429953183Z" level=info msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" Sep 9 00:20:33.437938 kubelet[2562]: I0909 00:20:33.437870 2562 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bf0de85-d410-4847-a3a5-0ed12ae23e80" path="/var/lib/kubelet/pods/8bf0de85-d410-4847-a3a5-0ed12ae23e80/volumes" Sep 9 00:20:33.438866 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:33.711600 systemd-networkd[1410]: cali276af539ac3: Gained IPv6LL Sep 9 00:20:33.925838 containerd[1473]: time="2025-09-09T00:20:33.925761252Z" level=info msg="StartContainer for \"c3ebb6e585b6fbaa60dc05d434d02d44b50f966a8c6767617e0efa9b353b2f94\" returns successfully" Sep 9 00:20:33.925838 containerd[1473]: time="2025-09-09T00:20:33.925774016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-f5xxs,Uid:4f10fd88-00c6-468e-a12c-ae8ac5f160de,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854\"" Sep 9 00:20:33.930461 kubelet[2562]: E0909 00:20:33.930401 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.720 [INFO][4711] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.720 [INFO][4711] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" iface="eth0" netns="/var/run/netns/cni-45f13b6b-5af0-b2b9-60ef-51db92cd35e8" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.721 [INFO][4711] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" iface="eth0" netns="/var/run/netns/cni-45f13b6b-5af0-b2b9-60ef-51db92cd35e8" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.721 [INFO][4711] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" iface="eth0" netns="/var/run/netns/cni-45f13b6b-5af0-b2b9-60ef-51db92cd35e8" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.721 [INFO][4711] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.721 [INFO][4711] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.751 [INFO][4748] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.751 [INFO][4748] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.751 [INFO][4748] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.959 [WARNING][4748] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:33.959 [INFO][4748] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:34.032 [INFO][4748] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:34.040511 containerd[1473]: 2025-09-09 00:20:34.036 [INFO][4711] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:34.040511 containerd[1473]: time="2025-09-09T00:20:34.040168844Z" level=info msg="TearDown network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" successfully" Sep 9 00:20:34.040511 containerd[1473]: time="2025-09-09T00:20:34.040202379Z" level=info msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" returns successfully" Sep 9 00:20:34.041105 kubelet[2562]: E0909 00:20:34.040631 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:34.041163 containerd[1473]: time="2025-09-09T00:20:34.041109331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88jjn,Uid:9eba6dae-45e2-4ee0-9d05-4984a3603e03,Namespace:kube-system,Attempt:1,}" Sep 9 00:20:34.051312 kubelet[2562]: I0909 00:20:34.051153 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zpbrv" podStartSLOduration=47.051092849 podStartE2EDuration="47.051092849s" podCreationTimestamp="2025-09-09 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:34.033272326 +0000 UTC m=+52.721262845" watchObservedRunningTime="2025-09-09 00:20:34.051092849 +0000 UTC m=+52.739083378" Sep 9 00:20:34.195125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945747302.mount: Deactivated successfully. Sep 9 00:20:34.195978 systemd[1]: run-netns-cni\x2d45f13b6b\x2d5af0\x2db2b9\x2d60ef\x2d51db92cd35e8.mount: Deactivated successfully. 
Sep 9 00:20:34.293479 systemd-networkd[1410]: cali1cd4e6a6ecf: Gained IPv6LL Sep 9 00:20:34.317644 systemd-networkd[1410]: cali59bfbda4e1c: Link UP Sep 9 00:20:34.319648 systemd-networkd[1410]: cali59bfbda4e1c: Gained carrier Sep 9 00:20:34.326438 containerd[1473]: time="2025-09-09T00:20:34.325869168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.331182 containerd[1473]: time="2025-09-09T00:20:34.330230977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 9 00:20:34.335037 containerd[1473]: time="2025-09-09T00:20:34.334231081Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.341506 containerd[1473]: time="2025-09-09T00:20:34.341425401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:34.342168 containerd[1473]: time="2025-09-09T00:20:34.342128462Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.377485489s" Sep 9 00:20:34.342243 containerd[1473]: time="2025-09-09T00:20:34.342173989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.148 [INFO][4795] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--674b8bbfcf--88jjn-eth0 coredns-674b8bbfcf- kube-system 9eba6dae-45e2-4ee0-9d05-4984a3603e03 1105 0 2025-09-09 00:19:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-674b8bbfcf-88jjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali59bfbda4e1c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.149 [INFO][4795] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.202 [INFO][4809] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" HandleID="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.203 [INFO][4809] ipam/ipam_plugin.go 265: Auto 
assigning IP ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" HandleID="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000490980), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-674b8bbfcf-88jjn", "timestamp":"2025-09-09 00:20:34.202415121 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.203 [INFO][4809] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.203 [INFO][4809] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.203 [INFO][4809] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.233 [INFO][4809] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.247 [INFO][4809] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.260 [INFO][4809] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.265 [INFO][4809] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.274 [INFO][4809] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.274 [INFO][4809] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.279 [INFO][4809] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112 Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.291 [INFO][4809] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.303 [INFO][4809] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.304 [INFO][4809] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" host="localhost" Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.304 [INFO][4809] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
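The IPAM sequence above is the whole allocation in miniature: the plugin takes the host-wide lock, confirms this node's affinity for the 192.168.88.128/26 block, and claims the next free /32 from it (192.168.88.134, .135 and .136 for the three workloads in this capture). The block arithmetic itself is plain CIDR math; a purely illustrative sketch with the standard library:

import ipaddress

block = ipaddress.ip_network("192.168.88.128/26")
print(block.num_addresses)                              # 64 addresses per block
print(ipaddress.ip_address("192.168.88.135") in block)  # True
hosts = list(block.hosts())
print(hosts[0], "-", hosts[-1])                         # 192.168.88.129 - 192.168.88.190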
Sep 9 00:20:34.345828 containerd[1473]: 2025-09-09 00:20:34.305 [INFO][4809] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" HandleID="k8s-pod-network.1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.348245 containerd[1473]: 2025-09-09 00:20:34.309 [INFO][4795] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88jjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9eba6dae-45e2-4ee0-9d05-4984a3603e03", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-674b8bbfcf-88jjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59bfbda4e1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:34.348245 containerd[1473]: 2025-09-09 00:20:34.309 [INFO][4795] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.348245 containerd[1473]: 2025-09-09 00:20:34.309 [INFO][4795] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali59bfbda4e1c ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.348245 containerd[1473]: 2025-09-09 00:20:34.323 [INFO][4795] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.348245 
containerd[1473]: 2025-09-09 00:20:34.324 [INFO][4795] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88jjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9eba6dae-45e2-4ee0-9d05-4984a3603e03", ResourceVersion:"1105", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112", Pod:"coredns-674b8bbfcf-88jjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59bfbda4e1c", MAC:"2e:33:d9:94:21:24", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:34.348245 containerd[1473]: 2025-09-09 00:20:34.340 [INFO][4795] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112" Namespace="kube-system" Pod="coredns-674b8bbfcf-88jjn" WorkloadEndpoint="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:34.348245 containerd[1473]: time="2025-09-09T00:20:34.345904195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 9 00:20:34.362931 containerd[1473]: time="2025-09-09T00:20:34.362618503Z" level=info msg="CreateContainer within sandbox \"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 9 00:20:34.390782 containerd[1473]: time="2025-09-09T00:20:34.390638290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:34.390782 containerd[1473]: time="2025-09-09T00:20:34.390706460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:34.390782 containerd[1473]: time="2025-09-09T00:20:34.390716450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:34.391038 containerd[1473]: time="2025-09-09T00:20:34.390835839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:34.423610 systemd[1]: Started cri-containerd-1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112.scope - libcontainer container 1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112. Sep 9 00:20:34.427806 containerd[1473]: time="2025-09-09T00:20:34.427755051Z" level=info msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" Sep 9 00:20:34.445914 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:34.490473 containerd[1473]: time="2025-09-09T00:20:34.489424168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-88jjn,Uid:9eba6dae-45e2-4ee0-9d05-4984a3603e03,Namespace:kube-system,Attempt:1,} returns sandbox id \"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112\"" Sep 9 00:20:34.490828 kubelet[2562]: E0909 00:20:34.490296 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:34.652749 containerd[1473]: time="2025-09-09T00:20:34.652601724Z" level=info msg="CreateContainer within sandbox \"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 9 00:20:34.766505 containerd[1473]: time="2025-09-09T00:20:34.766258820Z" level=info msg="CreateContainer within sandbox \"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"29c986460698bd6578224755bcd4cf92774b0f303fdd2de6b886c6f4710808f1\"" Sep 9 00:20:34.767528 containerd[1473]: time="2025-09-09T00:20:34.767176152Z" level=info msg="StartContainer for \"29c986460698bd6578224755bcd4cf92774b0f303fdd2de6b886c6f4710808f1\"" Sep 9 00:20:34.801618 systemd[1]: Started cri-containerd-29c986460698bd6578224755bcd4cf92774b0f303fdd2de6b886c6f4710808f1.scope - libcontainer container 29c986460698bd6578224755bcd4cf92774b0f303fdd2de6b886c6f4710808f1. Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.740 [INFO][4878] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.741 [INFO][4878] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" iface="eth0" netns="/var/run/netns/cni-1fb52c40-9e76-e249-6e11-b3df0c71a004" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.741 [INFO][4878] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" iface="eth0" netns="/var/run/netns/cni-1fb52c40-9e76-e249-6e11-b3df0c71a004" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.741 [INFO][4878] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" iface="eth0" netns="/var/run/netns/cni-1fb52c40-9e76-e249-6e11-b3df0c71a004" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.741 [INFO][4878] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.741 [INFO][4878] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.770 [INFO][4901] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.770 [INFO][4901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.770 [INFO][4901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.823 [WARNING][4901] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.824 [INFO][4901] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.826 [INFO][4901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:34.834027 containerd[1473]: 2025-09-09 00:20:34.830 [INFO][4878] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:34.835085 containerd[1473]: time="2025-09-09T00:20:34.835037585Z" level=info msg="TearDown network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" successfully" Sep 9 00:20:34.835085 containerd[1473]: time="2025-09-09T00:20:34.835072222Z" level=info msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" returns successfully" Sep 9 00:20:34.837735 containerd[1473]: time="2025-09-09T00:20:34.837421053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-8gmqn,Uid:1747ee41-b82d-4e7c-9b85-9f845fa3552f,Namespace:calico-apiserver,Attempt:1,}" Sep 9 00:20:34.862824 systemd-networkd[1410]: vxlan.calico: Gained IPv6LL Sep 9 00:20:34.899548 containerd[1473]: time="2025-09-09T00:20:34.899486672Z" level=info msg="StartContainer for \"29c986460698bd6578224755bcd4cf92774b0f303fdd2de6b886c6f4710808f1\" returns successfully" Sep 9 00:20:34.921465 containerd[1473]: time="2025-09-09T00:20:34.921210544Z" level=info msg="CreateContainer within sandbox \"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"49815f243e5b9071b3b360ddb589096e22d54ca92a0b2116b65859196d7637be\"" Sep 9 00:20:34.922551 containerd[1473]: time="2025-09-09T00:20:34.922481555Z" level=info msg="StartContainer for \"49815f243e5b9071b3b360ddb589096e22d54ca92a0b2116b65859196d7637be\"" Sep 9 00:20:34.940397 kubelet[2562]: E0909 00:20:34.940250 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:34.982764 systemd[1]: Started cri-containerd-49815f243e5b9071b3b360ddb589096e22d54ca92a0b2116b65859196d7637be.scope - libcontainer container 49815f243e5b9071b3b360ddb589096e22d54ca92a0b2116b65859196d7637be. Sep 9 00:20:34.991777 systemd-networkd[1410]: cali5ba7b1e0171: Gained IPv6LL Sep 9 00:20:35.190465 systemd[1]: run-netns-cni\x2d1fb52c40\x2d9e76\x2de249\x2d6e11\x2db3df0c71a004.mount: Deactivated successfully. 
Sep 9 00:20:35.191975 containerd[1473]: time="2025-09-09T00:20:35.191922887Z" level=info msg="StartContainer for \"49815f243e5b9071b3b360ddb589096e22d54ca92a0b2116b65859196d7637be\" returns successfully" Sep 9 00:20:35.310562 systemd-networkd[1410]: calida0972dd047: Gained IPv6LL Sep 9 00:20:35.393084 systemd-networkd[1410]: caliae68aef4e67: Link UP Sep 9 00:20:35.393597 systemd-networkd[1410]: caliae68aef4e67: Gained carrier Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.298 [INFO][4986] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0 calico-apiserver-c9b45b4c5- calico-apiserver 1747ee41-b82d-4e7c-9b85-9f845fa3552f 1127 0 2025-09-09 00:20:01 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c9b45b4c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c9b45b4c5-8gmqn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliae68aef4e67 [] [] }} ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.299 [INFO][4986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.335 [INFO][5003] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" HandleID="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.336 [INFO][5003] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" HandleID="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000510ef0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c9b45b4c5-8gmqn", "timestamp":"2025-09-09 00:20:35.335794984 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.336 [INFO][5003] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.336 [INFO][5003] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.336 [INFO][5003] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.344 [INFO][5003] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.351 [INFO][5003] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.357 [INFO][5003] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.363 [INFO][5003] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.367 [INFO][5003] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.367 [INFO][5003] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.370 [INFO][5003] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.376 [INFO][5003] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.385 [INFO][5003] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.385 [INFO][5003] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" host="localhost" Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.385 [INFO][5003] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 9 00:20:35.412516 containerd[1473]: 2025-09-09 00:20:35.385 [INFO][5003] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" HandleID="k8s-pod-network.794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.389 [INFO][4986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1747ee41-b82d-4e7c-9b85-9f845fa3552f", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c9b45b4c5-8gmqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae68aef4e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.389 [INFO][4986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.389 [INFO][4986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliae68aef4e67 ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.394 [INFO][4986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.394 [INFO][4986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1747ee41-b82d-4e7c-9b85-9f845fa3552f", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce", Pod:"calico-apiserver-c9b45b4c5-8gmqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae68aef4e67", MAC:"da:7e:d3:ef:5e:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:35.413753 containerd[1473]: 2025-09-09 00:20:35.408 [INFO][4986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce" Namespace="calico-apiserver" Pod="calico-apiserver-c9b45b4c5-8gmqn" WorkloadEndpoint="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:35.438231 containerd[1473]: time="2025-09-09T00:20:35.438102053Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 9 00:20:35.438231 containerd[1473]: time="2025-09-09T00:20:35.438166938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 9 00:20:35.438231 containerd[1473]: time="2025-09-09T00:20:35.438181997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:35.438504 containerd[1473]: time="2025-09-09T00:20:35.438291277Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 9 00:20:35.468611 systemd[1]: Started cri-containerd-794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce.scope - libcontainer container 794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce. 
Sep 9 00:20:35.485938 systemd-resolved[1334]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 9 00:20:35.515284 containerd[1473]: time="2025-09-09T00:20:35.515119532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c9b45b4c5-8gmqn,Uid:1747ee41-b82d-4e7c-9b85-9f845fa3552f,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce\"" Sep 9 00:20:35.944355 kubelet[2562]: E0909 00:20:35.944307 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:35.946664 kubelet[2562]: E0909 00:20:35.945888 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:35.951226 systemd-networkd[1410]: cali59bfbda4e1c: Gained IPv6LL Sep 9 00:20:35.974016 kubelet[2562]: I0909 00:20:35.973916 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-88jjn" podStartSLOduration=48.973897237 podStartE2EDuration="48.973897237s" podCreationTimestamp="2025-09-09 00:19:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 00:20:35.956414334 +0000 UTC m=+54.644404833" watchObservedRunningTime="2025-09-09 00:20:35.973897237 +0000 UTC m=+54.661887746" Sep 9 00:20:36.590672 systemd-networkd[1410]: caliae68aef4e67: Gained IPv6LL Sep 9 00:20:36.858149 systemd[1]: Started sshd@11-10.0.0.15:22-10.0.0.1:41434.service - OpenSSH per-connection server daemon (10.0.0.1:41434). Sep 9 00:20:36.948251 kubelet[2562]: E0909 00:20:36.948197 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:36.948947 kubelet[2562]: E0909 00:20:36.948333 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:37.047967 sshd[5063]: Accepted publickey for core from 10.0.0.1 port 41434 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:37.051090 sshd[5063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:37.059055 systemd-logind[1454]: New session 12 of user core. Sep 9 00:20:37.064820 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 9 00:20:37.220967 sshd[5063]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:37.227012 systemd[1]: sshd@11-10.0.0.15:22-10.0.0.1:41434.service: Deactivated successfully. Sep 9 00:20:37.229320 systemd[1]: session-12.scope: Deactivated successfully. Sep 9 00:20:37.229651 systemd-logind[1454]: Session 12 logged out. Waiting for processes to exit. Sep 9 00:20:37.231791 systemd-logind[1454]: Removed session 12. 
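Each "Gained IPv6LL" line above marks a cali* veth acquiring its IPv6 link-local address. If the address is EUI-64-derived (an assumption; systemd-networkd can also use stable-privacy addressing), it follows mechanically from the MAC recorded in the endpoint, e.g. 12:95:10:79:c1:10 for calida0972dd047:

def eui64_link_local(mac: str) -> str:
    b = [int(x, 16) for x in mac.split(":")]
    b[0] ^= 0x02                        # flip the universal/local bit
    eui = b[:3] + [0xFF, 0xFE] + b[3:]  # splice ff:fe into the middle
    groups = [f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2)]
    return "fe80::" + ":".join(groups)

print(eui64_link_local("12:95:10:79:c1:10"))  # fe80::1095:10ff:fe79:c110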
Sep 9 00:20:37.952257 kubelet[2562]: E0909 00:20:37.952203 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:20:38.329049 containerd[1473]: time="2025-09-09T00:20:38.327740955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:38.340034 containerd[1473]: time="2025-09-09T00:20:38.339351715Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 9 00:20:38.401469 containerd[1473]: time="2025-09-09T00:20:38.401316115Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:38.498103 containerd[1473]: time="2025-09-09T00:20:38.497771094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:38.499230 containerd[1473]: time="2025-09-09T00:20:38.499167450Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 4.153128908s" Sep 9 00:20:38.499316 containerd[1473]: time="2025-09-09T00:20:38.499230521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 9 00:20:38.502035 containerd[1473]: time="2025-09-09T00:20:38.501672981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 9 00:20:38.622767 containerd[1473]: time="2025-09-09T00:20:38.622532309Z" level=info msg="CreateContainer within sandbox \"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 9 00:20:38.661606 containerd[1473]: time="2025-09-09T00:20:38.661513324Z" level=info msg="CreateContainer within sandbox \"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307\"" Sep 9 00:20:38.663722 containerd[1473]: time="2025-09-09T00:20:38.662729454Z" level=info msg="StartContainer for \"19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307\"" Sep 9 00:20:38.703811 systemd[1]: Started cri-containerd-19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307.scope - libcontainer container 19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307. 
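The pull metrics above are enough for a quick throughput estimate: containerd reports 51277746 bytes read for the kube-controllers image over the 4.153128908s the pull took, i.e. roughly 12 MB/s from ghcr.io:

bytes_read = 51_277_746   # "active requests=0, bytes read=51277746"
elapsed_s = 4.153128908   # "... in 4.153128908s"
print(f"{bytes_read / elapsed_s / 1e6:.1f} MB/s")  # ~12.3 MB/s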
Sep 9 00:20:38.766336 containerd[1473]: time="2025-09-09T00:20:38.765127052Z" level=info msg="StartContainer for \"19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307\" returns successfully" Sep 9 00:20:39.036473 kubelet[2562]: I0909 00:20:39.035426 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5cbbb5d746-lpbgl" podStartSLOduration=27.548512554 podStartE2EDuration="34.035396124s" podCreationTimestamp="2025-09-09 00:20:05 +0000 UTC" firstStartedPulling="2025-09-09 00:20:32.014034395 +0000 UTC m=+50.702024894" lastFinishedPulling="2025-09-09 00:20:38.500917955 +0000 UTC m=+57.188908464" observedRunningTime="2025-09-09 00:20:38.976724953 +0000 UTC m=+57.664715452" watchObservedRunningTime="2025-09-09 00:20:39.035396124 +0000 UTC m=+57.723386624" Sep 9 00:20:39.521054 systemd[1]: run-containerd-runc-k8s.io-19defaf9215aff71fbdb56fb530c18f3eddcf8c14f8eee80e8a8837bbce38307-runc.OvTJeU.mount: Deactivated successfully. Sep 9 00:20:40.421929 containerd[1473]: time="2025-09-09T00:20:40.421837573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.423560 containerd[1473]: time="2025-09-09T00:20:40.423447304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 9 00:20:40.425830 containerd[1473]: time="2025-09-09T00:20:40.425757818Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.435058 containerd[1473]: time="2025-09-09T00:20:40.434965354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:40.436425 containerd[1473]: time="2025-09-09T00:20:40.436293306Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.934569066s" Sep 9 00:20:40.436425 containerd[1473]: time="2025-09-09T00:20:40.436381926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 9 00:20:40.438510 containerd[1473]: time="2025-09-09T00:20:40.438455946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 9 00:20:40.444436 containerd[1473]: time="2025-09-09T00:20:40.444341894Z" level=info msg="CreateContainer within sandbox \"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 9 00:20:40.469296 containerd[1473]: time="2025-09-09T00:20:40.469225409Z" level=info msg="CreateContainer within sandbox \"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"855504a41f8aad11af237c53e5e419d266c8860bf6f26c453a81ab9174e4a4e4\"" Sep 9 00:20:40.470082 containerd[1473]: time="2025-09-09T00:20:40.469809998Z" level=info msg="StartContainer for 
\"855504a41f8aad11af237c53e5e419d266c8860bf6f26c453a81ab9174e4a4e4\"" Sep 9 00:20:40.511604 systemd[1]: Started cri-containerd-855504a41f8aad11af237c53e5e419d266c8860bf6f26c453a81ab9174e4a4e4.scope - libcontainer container 855504a41f8aad11af237c53e5e419d266c8860bf6f26c453a81ab9174e4a4e4. Sep 9 00:20:40.563159 containerd[1473]: time="2025-09-09T00:20:40.563098454Z" level=info msg="StartContainer for \"855504a41f8aad11af237c53e5e419d266c8860bf6f26c453a81ab9174e4a4e4\" returns successfully" Sep 9 00:20:41.405440 containerd[1473]: time="2025-09-09T00:20:41.405355608Z" level=info msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.450 [WARNING][5216] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6169582d-5f41-430b-9890-0f5959297de0", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd", Pod:"coredns-674b8bbfcf-zpbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba7b1e0171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.450 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.450 [INFO][5216] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" iface="eth0" netns="" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.450 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.450 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.485 [INFO][5227] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.485 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.485 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.492 [WARNING][5227] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.492 [INFO][5227] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.494 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.501923 containerd[1473]: 2025-09-09 00:20:41.496 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.502973 containerd[1473]: time="2025-09-09T00:20:41.501983681Z" level=info msg="TearDown network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" successfully" Sep 9 00:20:41.502973 containerd[1473]: time="2025-09-09T00:20:41.502031402Z" level=info msg="StopPodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" returns successfully" Sep 9 00:20:41.502973 containerd[1473]: time="2025-09-09T00:20:41.502657750Z" level=info msg="RemovePodSandbox for \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" Sep 9 00:20:41.505917 containerd[1473]: time="2025-09-09T00:20:41.505872621Z" level=info msg="Forcibly stopping sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\"" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.547 [WARNING][5244] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"6169582d-5f41-430b-9890-0f5959297de0", ResourceVersion:"1137", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"aae262ca5a99906d760a44532507d6720e2cc2da2121e9147284a33dc10425fd", Pod:"coredns-674b8bbfcf-zpbrv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5ba7b1e0171", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.547 [INFO][5244] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.547 [INFO][5244] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" iface="eth0" netns="" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.547 [INFO][5244] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.547 [INFO][5244] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.571 [INFO][5253] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.571 [INFO][5253] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.571 [INFO][5253] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.579 [WARNING][5253] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.579 [INFO][5253] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" HandleID="k8s-pod-network.5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Workload="localhost-k8s-coredns--674b8bbfcf--zpbrv-eth0" Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.581 [INFO][5253] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.587414 containerd[1473]: 2025-09-09 00:20:41.583 [INFO][5244] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5" Sep 9 00:20:41.587896 containerd[1473]: time="2025-09-09T00:20:41.587458393Z" level=info msg="TearDown network for sandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" successfully" Sep 9 00:20:41.596384 containerd[1473]: time="2025-09-09T00:20:41.596319567Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:41.596480 containerd[1473]: time="2025-09-09T00:20:41.596455056Z" level=info msg="RemovePodSandbox \"5eb34825d008c40ee56c24fa25b4768e5de5a41729eb3e3591ed6ee80f0ba9a5\" returns successfully" Sep 9 00:20:41.597276 containerd[1473]: time="2025-09-09T00:20:41.597226372Z" level=info msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.643 [WARNING][5271] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" WorkloadEndpoint="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.643 [INFO][5271] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.643 [INFO][5271] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" iface="eth0" netns="" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.643 [INFO][5271] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.643 [INFO][5271] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.670 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.670 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.670 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.678 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.678 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.680 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.686661 containerd[1473]: 2025-09-09 00:20:41.683 [INFO][5271] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.686661 containerd[1473]: time="2025-09-09T00:20:41.686594625Z" level=info msg="TearDown network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" successfully" Sep 9 00:20:41.686661 containerd[1473]: time="2025-09-09T00:20:41.686629902Z" level=info msg="StopPodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" returns successfully" Sep 9 00:20:41.687413 containerd[1473]: time="2025-09-09T00:20:41.687323169Z" level=info msg="RemovePodSandbox for \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" Sep 9 00:20:41.687413 containerd[1473]: time="2025-09-09T00:20:41.687393794Z" level=info msg="Forcibly stopping sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\"" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.731 [WARNING][5298] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" WorkloadEndpoint="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.732 [INFO][5298] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.732 [INFO][5298] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" iface="eth0" netns="" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.732 [INFO][5298] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.732 [INFO][5298] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.758 [INFO][5306] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.758 [INFO][5306] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.758 [INFO][5306] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.767 [WARNING][5306] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.767 [INFO][5306] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" HandleID="k8s-pod-network.6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Workload="localhost-k8s-whisker--54cbddff85--98lc8-eth0" Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.769 [INFO][5306] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.776324 containerd[1473]: 2025-09-09 00:20:41.773 [INFO][5298] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb" Sep 9 00:20:41.776825 containerd[1473]: time="2025-09-09T00:20:41.776388561Z" level=info msg="TearDown network for sandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" successfully" Sep 9 00:20:41.783264 containerd[1473]: time="2025-09-09T00:20:41.783165508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:41.783264 containerd[1473]: time="2025-09-09T00:20:41.783239970Z" level=info msg="RemovePodSandbox \"6d3f2eb8af17f07b237720e52362278c70bfbd0cde8998ab16e84c85742158cb\" returns successfully" Sep 9 00:20:41.783689 containerd[1473]: time="2025-09-09T00:20:41.783653843Z" level=info msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.824 [WARNING][5326] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b8v4d-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2d5d79a3-0364-4187-9384-d9371101170a", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247", Pod:"goldmane-54d579b49d-b8v4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd4e6a6ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.825 [INFO][5326] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.825 [INFO][5326] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" iface="eth0" netns="" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.825 [INFO][5326] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.825 [INFO][5326] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.850 [INFO][5334] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.850 [INFO][5334] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.850 [INFO][5334] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.858 [WARNING][5334] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.858 [INFO][5334] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.859 [INFO][5334] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.865636 containerd[1473]: 2025-09-09 00:20:41.862 [INFO][5326] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.865636 containerd[1473]: time="2025-09-09T00:20:41.865485044Z" level=info msg="TearDown network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" successfully" Sep 9 00:20:41.865636 containerd[1473]: time="2025-09-09T00:20:41.865515953Z" level=info msg="StopPodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" returns successfully" Sep 9 00:20:41.866347 containerd[1473]: time="2025-09-09T00:20:41.866287168Z" level=info msg="RemovePodSandbox for \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" Sep 9 00:20:41.866347 containerd[1473]: time="2025-09-09T00:20:41.866320061Z" level=info msg="Forcibly stopping sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\"" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.906 [WARNING][5352] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--54d579b49d--b8v4d-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"2d5d79a3-0364-4187-9384-d9371101170a", ResourceVersion:"1093", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247", Pod:"goldmane-54d579b49d-b8v4d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd4e6a6ecf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.906 [INFO][5352] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.907 [INFO][5352] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" iface="eth0" netns="" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.907 [INFO][5352] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.907 [INFO][5352] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.936 [INFO][5361] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.936 [INFO][5361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.936 [INFO][5361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.943 [WARNING][5361] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.944 [INFO][5361] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" HandleID="k8s-pod-network.dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Workload="localhost-k8s-goldmane--54d579b49d--b8v4d-eth0" Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.945 [INFO][5361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:41.952629 containerd[1473]: 2025-09-09 00:20:41.948 [INFO][5352] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79" Sep 9 00:20:41.952629 containerd[1473]: time="2025-09-09T00:20:41.952465687Z" level=info msg="TearDown network for sandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" successfully" Sep 9 00:20:41.964279 containerd[1473]: time="2025-09-09T00:20:41.964131027Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:41.964279 containerd[1473]: time="2025-09-09T00:20:41.964239745Z" level=info msg="RemovePodSandbox \"dacb7c4b0735859a8128ada13a60005747744d624f6840af801b6a251355ba79\" returns successfully" Sep 9 00:20:41.965147 containerd[1473]: time="2025-09-09T00:20:41.965098789Z" level=info msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.002 [WARNING][5378] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1747ee41-b82d-4e7c-9b85-9f845fa3552f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce", Pod:"calico-apiserver-c9b45b4c5-8gmqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae68aef4e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.002 [INFO][5378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.002 [INFO][5378] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" iface="eth0" netns="" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.002 [INFO][5378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.002 [INFO][5378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.022 [INFO][5386] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.023 [INFO][5386] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.023 [INFO][5386] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.030 [WARNING][5386] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.030 [INFO][5386] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.031 [INFO][5386] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.037778 containerd[1473]: 2025-09-09 00:20:42.034 [INFO][5378] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.038356 containerd[1473]: time="2025-09-09T00:20:42.037827258Z" level=info msg="TearDown network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" successfully" Sep 9 00:20:42.038356 containerd[1473]: time="2025-09-09T00:20:42.037855522Z" level=info msg="StopPodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" returns successfully" Sep 9 00:20:42.038464 containerd[1473]: time="2025-09-09T00:20:42.038449889Z" level=info msg="RemovePodSandbox for \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" Sep 9 00:20:42.038523 containerd[1473]: time="2025-09-09T00:20:42.038479205Z" level=info msg="Forcibly stopping sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\"" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.074 [WARNING][5404] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"1747ee41-b82d-4e7c-9b85-9f845fa3552f", ResourceVersion:"1145", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce", Pod:"calico-apiserver-c9b45b4c5-8gmqn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliae68aef4e67", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.074 [INFO][5404] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.074 [INFO][5404] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" iface="eth0" netns="" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.074 [INFO][5404] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.074 [INFO][5404] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.103 [INFO][5412] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.104 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.104 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.111 [WARNING][5412] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.111 [INFO][5412] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" HandleID="k8s-pod-network.723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--8gmqn-eth0" Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.112 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.120221 containerd[1473]: 2025-09-09 00:20:42.116 [INFO][5404] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa" Sep 9 00:20:42.120948 containerd[1473]: time="2025-09-09T00:20:42.120894234Z" level=info msg="TearDown network for sandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" successfully" Sep 9 00:20:42.125681 containerd[1473]: time="2025-09-09T00:20:42.125616004Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:42.125744 containerd[1473]: time="2025-09-09T00:20:42.125710495Z" level=info msg="RemovePodSandbox \"723c39377b78c82c861fad8da43710060813bbb2a443b46aa1701cd71635e6aa\" returns successfully" Sep 9 00:20:42.126516 containerd[1473]: time="2025-09-09T00:20:42.126448406Z" level=info msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.174 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0", GenerateName:"calico-kube-controllers-5cbbb5d746-", Namespace:"calico-system", SelfLink:"", UID:"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbbb5d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703", Pod:"calico-kube-controllers-5cbbb5d746-lpbgl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali276af539ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.174 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.174 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" iface="eth0" netns="" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.174 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.174 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.202 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.202 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.202 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.210 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.210 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.212 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.219658 containerd[1473]: 2025-09-09 00:20:42.215 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.219658 containerd[1473]: time="2025-09-09T00:20:42.219608063Z" level=info msg="TearDown network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" successfully" Sep 9 00:20:42.219658 containerd[1473]: time="2025-09-09T00:20:42.219642679Z" level=info msg="StopPodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" returns successfully" Sep 9 00:20:42.220525 containerd[1473]: time="2025-09-09T00:20:42.220434393Z" level=info msg="RemovePodSandbox for \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" Sep 9 00:20:42.220525 containerd[1473]: time="2025-09-09T00:20:42.220497514Z" level=info msg="Forcibly stopping sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\"" Sep 9 00:20:42.233516 systemd[1]: Started sshd@12-10.0.0.15:22-10.0.0.1:50082.service - OpenSSH per-connection server daemon (10.0.0.1:50082). Sep 9 00:20:42.321507 sshd[5462]: Accepted publickey for core from 10.0.0.1 port 50082 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:42.325073 sshd[5462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.269 [WARNING][5457] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0", GenerateName:"calico-kube-controllers-5cbbb5d746-", Namespace:"calico-system", SelfLink:"", UID:"e65cbb6f-ba37-4168-a027-ea0ff3dac6d4", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5cbbb5d746", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"09528accb367d807250fc2deadc6d4fb8453a160a8df66e979dc19e1bf22f703", Pod:"calico-kube-controllers-5cbbb5d746-lpbgl", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali276af539ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.270 [INFO][5457] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.270 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" iface="eth0" netns="" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.271 [INFO][5457] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.271 [INFO][5457] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.309 [INFO][5467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.311 [INFO][5467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.312 [INFO][5467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.325 [WARNING][5467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.325 [INFO][5467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" HandleID="k8s-pod-network.7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Workload="localhost-k8s-calico--kube--controllers--5cbbb5d746--lpbgl-eth0" Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.327 [INFO][5467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.334108 containerd[1473]: 2025-09-09 00:20:42.330 [INFO][5457] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519" Sep 9 00:20:42.334835 containerd[1473]: time="2025-09-09T00:20:42.334166822Z" level=info msg="TearDown network for sandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" successfully" Sep 9 00:20:42.336979 systemd-logind[1454]: New session 13 of user core. Sep 9 00:20:42.339860 containerd[1473]: time="2025-09-09T00:20:42.339810456Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:42.339976 containerd[1473]: time="2025-09-09T00:20:42.339882312Z" level=info msg="RemovePodSandbox \"7db2dc82694cc7cfa5afc43dd017927ad98e3460a0a9e88650295da5bb916519\" returns successfully" Sep 9 00:20:42.340515 containerd[1473]: time="2025-09-09T00:20:42.340489995Z" level=info msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" Sep 9 00:20:42.343588 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.400 [WARNING][5483] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88jjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9eba6dae-45e2-4ee0-9d05-4984a3603e03", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112", Pod:"coredns-674b8bbfcf-88jjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59bfbda4e1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.401 [INFO][5483] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.401 [INFO][5483] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" iface="eth0" netns="" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.401 [INFO][5483] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.401 [INFO][5483] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.453 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.453 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.453 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.465 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.465 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.470 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.478962 containerd[1473]: 2025-09-09 00:20:42.474 [INFO][5483] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.478962 containerd[1473]: time="2025-09-09T00:20:42.478925626Z" level=info msg="TearDown network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" successfully" Sep 9 00:20:42.478962 containerd[1473]: time="2025-09-09T00:20:42.478956986Z" level=info msg="StopPodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" returns successfully" Sep 9 00:20:42.480337 containerd[1473]: time="2025-09-09T00:20:42.480283223Z" level=info msg="RemovePodSandbox for \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" Sep 9 00:20:42.480337 containerd[1473]: time="2025-09-09T00:20:42.480326045Z" level=info msg="Forcibly stopping sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\"" Sep 9 00:20:42.533185 sshd[5462]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:42.541001 systemd[1]: sshd@12-10.0.0.15:22-10.0.0.1:50082.service: Deactivated successfully. Sep 9 00:20:42.545645 systemd[1]: session-13.scope: Deactivated successfully. Sep 9 00:20:42.547347 systemd-logind[1454]: Session 13 logged out. Waiting for processes to exit. Sep 9 00:20:42.548915 systemd-logind[1454]: Removed session 13. Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.542 [WARNING][5520] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--674b8bbfcf--88jjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9eba6dae-45e2-4ee0-9d05-4984a3603e03", ResourceVersion:"1156", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 19, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a20fde510b6a1a27809304367197433e7d0092a3df5c0042600c1a19cdd5112", Pod:"coredns-674b8bbfcf-88jjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali59bfbda4e1c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.542 [INFO][5520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.542 [INFO][5520] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" iface="eth0" netns="" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.542 [INFO][5520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.542 [INFO][5520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.577 [INFO][5531] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.577 [INFO][5531] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.577 [INFO][5531] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.591 [WARNING][5531] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.591 [INFO][5531] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" HandleID="k8s-pod-network.beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Workload="localhost-k8s-coredns--674b8bbfcf--88jjn-eth0" Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.594 [INFO][5531] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:42.601936 containerd[1473]: 2025-09-09 00:20:42.597 [INFO][5520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618" Sep 9 00:20:42.602826 containerd[1473]: time="2025-09-09T00:20:42.602107086Z" level=info msg="TearDown network for sandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" successfully" Sep 9 00:20:43.356992 containerd[1473]: time="2025-09-09T00:20:43.356902263Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:43.357199 containerd[1473]: time="2025-09-09T00:20:43.357020639Z" level=info msg="RemovePodSandbox \"beeed4ee0dbc0509e135946ad11b662c63143353c3cb64b323326ab379865618\" returns successfully" Sep 9 00:20:43.357603 containerd[1473]: time="2025-09-09T00:20:43.357575890Z" level=info msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.403 [WARNING][5554] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7zmjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626", Pod:"csi-node-driver-7zmjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17e7d8a29c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.404 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.404 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" iface="eth0" netns="" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.404 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.404 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.436 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.436 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.436 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.445 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.445 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.446 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:43.453187 containerd[1473]: 2025-09-09 00:20:43.449 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.453793 containerd[1473]: time="2025-09-09T00:20:43.453248473Z" level=info msg="TearDown network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" successfully" Sep 9 00:20:43.453793 containerd[1473]: time="2025-09-09T00:20:43.453283301Z" level=info msg="StopPodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" returns successfully" Sep 9 00:20:43.453917 containerd[1473]: time="2025-09-09T00:20:43.453880271Z" level=info msg="RemovePodSandbox for \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" Sep 9 00:20:43.453917 containerd[1473]: time="2025-09-09T00:20:43.453911952Z" level=info msg="Forcibly stopping sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\"" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.503 [WARNING][5581] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--7zmjc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"167309ad-7f53-41fb-a5c4-b6c3ac0a5dbe", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626", Pod:"csi-node-driver-7zmjc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali17e7d8a29c3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.503 [INFO][5581] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.503 [INFO][5581] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" iface="eth0" netns="" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.503 [INFO][5581] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.504 [INFO][5581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.534 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.535 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.535 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.544 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.544 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" HandleID="k8s-pod-network.86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Workload="localhost-k8s-csi--node--driver--7zmjc-eth0" Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.548 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:43.553959 containerd[1473]: 2025-09-09 00:20:43.551 [INFO][5581] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242" Sep 9 00:20:43.554510 containerd[1473]: time="2025-09-09T00:20:43.553996227Z" level=info msg="TearDown network for sandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" successfully" Sep 9 00:20:43.594266 containerd[1473]: time="2025-09-09T00:20:43.594210435Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:43.594266 containerd[1473]: time="2025-09-09T00:20:43.594294285Z" level=info msg="RemovePodSandbox \"86cc82ecda80dd7de55c48e2ea09feebec8f8b90163b2c565e4c548420c89242\" returns successfully" Sep 9 00:20:43.594976 containerd[1473]: time="2025-09-09T00:20:43.594931634Z" level=info msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.680 [WARNING][5608] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f10fd88-00c6-468e-a12c-ae8ac5f160de", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854", Pod:"calico-apiserver-c9b45b4c5-f5xxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida0972dd047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.681 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.681 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" iface="eth0" netns="" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.681 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.681 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.720 [INFO][5617] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.720 [INFO][5617] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.720 [INFO][5617] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.731 [WARNING][5617] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.731 [INFO][5617] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.732 [INFO][5617] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:43.739104 containerd[1473]: 2025-09-09 00:20:43.736 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.740074 containerd[1473]: time="2025-09-09T00:20:43.739143287Z" level=info msg="TearDown network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" successfully" Sep 9 00:20:43.740074 containerd[1473]: time="2025-09-09T00:20:43.739180969Z" level=info msg="StopPodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" returns successfully" Sep 9 00:20:43.740074 containerd[1473]: time="2025-09-09T00:20:43.739768051Z" level=info msg="RemovePodSandbox for \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" Sep 9 00:20:43.740074 containerd[1473]: time="2025-09-09T00:20:43.739795563Z" level=info msg="Forcibly stopping sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\"" Sep 9 00:20:43.823552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount631575541.mount: Deactivated successfully. Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.903 [WARNING][5634] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0", GenerateName:"calico-apiserver-c9b45b4c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"4f10fd88-00c6-468e-a12c-ae8ac5f160de", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2025, time.September, 9, 0, 20, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c9b45b4c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854", Pod:"calico-apiserver-c9b45b4c5-f5xxs", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calida0972dd047", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.903 [INFO][5634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.903 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" iface="eth0" netns="" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.903 [INFO][5634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.903 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.927 [INFO][5648] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.928 [INFO][5648] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.928 [INFO][5648] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.935 [WARNING][5648] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.935 [INFO][5648] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" HandleID="k8s-pod-network.c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Workload="localhost-k8s-calico--apiserver--c9b45b4c5--f5xxs-eth0" Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.938 [INFO][5648] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 9 00:20:43.944836 containerd[1473]: 2025-09-09 00:20:43.941 [INFO][5634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378" Sep 9 00:20:43.945346 containerd[1473]: time="2025-09-09T00:20:43.944888308Z" level=info msg="TearDown network for sandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" successfully" Sep 9 00:20:44.072678 containerd[1473]: time="2025-09-09T00:20:44.072489998Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 9 00:20:44.072678 containerd[1473]: time="2025-09-09T00:20:44.072586281Z" level=info msg="RemovePodSandbox \"c747d7c0970fce88bfd0a61d0f361eb3e8e7c3a39e6c22c62faabc8dcc89f378\" returns successfully" Sep 9 00:20:44.576497 containerd[1473]: time="2025-09-09T00:20:44.576445416Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:44.577410 containerd[1473]: time="2025-09-09T00:20:44.577303496Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 9 00:20:44.578818 containerd[1473]: time="2025-09-09T00:20:44.578784065Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:44.581892 containerd[1473]: time="2025-09-09T00:20:44.581823411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:44.582676 containerd[1473]: time="2025-09-09T00:20:44.582631415Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.144124922s" Sep 9 00:20:44.582676 containerd[1473]: time="2025-09-09T00:20:44.582668576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 9 00:20:44.583967 containerd[1473]: time="2025-09-09T00:20:44.583939895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:20:44.589497 containerd[1473]: 
time="2025-09-09T00:20:44.589445874Z" level=info msg="CreateContainer within sandbox \"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 9 00:20:44.611521 containerd[1473]: time="2025-09-09T00:20:44.610530037Z" level=info msg="CreateContainer within sandbox \"84960935d450ecac6d959c72041c567e3531daa06f124124cdb0cef3941ec247\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"28eb1208ec33ef64d7f6b8e92cfdff8872a06969c1ef386cabd552259daf2c93\"" Sep 9 00:20:44.611521 containerd[1473]: time="2025-09-09T00:20:44.611300569Z" level=info msg="StartContainer for \"28eb1208ec33ef64d7f6b8e92cfdff8872a06969c1ef386cabd552259daf2c93\"" Sep 9 00:20:44.651518 systemd[1]: Started cri-containerd-28eb1208ec33ef64d7f6b8e92cfdff8872a06969c1ef386cabd552259daf2c93.scope - libcontainer container 28eb1208ec33ef64d7f6b8e92cfdff8872a06969c1ef386cabd552259daf2c93. Sep 9 00:20:44.703250 containerd[1473]: time="2025-09-09T00:20:44.703186090Z" level=info msg="StartContainer for \"28eb1208ec33ef64d7f6b8e92cfdff8872a06969c1ef386cabd552259daf2c93\" returns successfully" Sep 9 00:20:46.095825 kubelet[2562]: I0909 00:20:46.095746 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-b8v4d" podStartSLOduration=30.719206725 podStartE2EDuration="42.095727165s" podCreationTimestamp="2025-09-09 00:20:04 +0000 UTC" firstStartedPulling="2025-09-09 00:20:33.20726506 +0000 UTC m=+51.895255559" lastFinishedPulling="2025-09-09 00:20:44.5837855 +0000 UTC m=+63.271775999" observedRunningTime="2025-09-09 00:20:44.999378328 +0000 UTC m=+63.687368827" watchObservedRunningTime="2025-09-09 00:20:46.095727165 +0000 UTC m=+64.783717664" Sep 9 00:20:47.589457 systemd[1]: Started sshd@13-10.0.0.15:22-10.0.0.1:50096.service - OpenSSH per-connection server daemon (10.0.0.1:50096). Sep 9 00:20:47.706253 sshd[5756]: Accepted publickey for core from 10.0.0.1 port 50096 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:47.712743 sshd[5756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:47.727553 systemd-logind[1454]: New session 14 of user core. Sep 9 00:20:47.754338 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 9 00:20:48.245560 sshd[5756]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:48.266109 systemd[1]: sshd@13-10.0.0.15:22-10.0.0.1:50096.service: Deactivated successfully. Sep 9 00:20:48.270562 systemd[1]: session-14.scope: Deactivated successfully. Sep 9 00:20:48.280236 systemd-logind[1454]: Session 14 logged out. Waiting for processes to exit. Sep 9 00:20:48.299072 systemd[1]: Started sshd@14-10.0.0.15:22-10.0.0.1:50100.service - OpenSSH per-connection server daemon (10.0.0.1:50100). Sep 9 00:20:48.299815 systemd-logind[1454]: Removed session 14. Sep 9 00:20:48.352485 sshd[5775]: Accepted publickey for core from 10.0.0.1 port 50100 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:48.356754 sshd[5775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:48.369760 systemd-logind[1454]: New session 15 of user core. Sep 9 00:20:48.381335 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 9 00:20:48.753167 sshd[5775]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:48.780909 systemd[1]: Started sshd@15-10.0.0.15:22-10.0.0.1:50114.service - OpenSSH per-connection server daemon (10.0.0.1:50114). 
Sep 9 00:20:48.782441 systemd[1]: sshd@14-10.0.0.15:22-10.0.0.1:50100.service: Deactivated successfully. Sep 9 00:20:48.795857 systemd[1]: session-15.scope: Deactivated successfully. Sep 9 00:20:48.805693 systemd-logind[1454]: Session 15 logged out. Waiting for processes to exit. Sep 9 00:20:48.815458 systemd-logind[1454]: Removed session 15. Sep 9 00:20:48.887783 sshd[5785]: Accepted publickey for core from 10.0.0.1 port 50114 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:48.896283 sshd[5785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:48.925330 systemd-logind[1454]: New session 16 of user core. Sep 9 00:20:48.947328 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 9 00:20:49.221726 sshd[5785]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:49.243016 systemd-logind[1454]: Session 16 logged out. Waiting for processes to exit. Sep 9 00:20:49.245193 systemd[1]: sshd@15-10.0.0.15:22-10.0.0.1:50114.service: Deactivated successfully. Sep 9 00:20:49.248597 systemd[1]: session-16.scope: Deactivated successfully. Sep 9 00:20:49.250988 systemd-logind[1454]: Removed session 16. Sep 9 00:20:50.271927 containerd[1473]: time="2025-09-09T00:20:50.271453400Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:50.278022 containerd[1473]: time="2025-09-09T00:20:50.277905475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 9 00:20:50.362916 containerd[1473]: time="2025-09-09T00:20:50.362571933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.778595036s" Sep 9 00:20:50.362916 containerd[1473]: time="2025-09-09T00:20:50.362633500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:20:50.371835 containerd[1473]: time="2025-09-09T00:20:50.368177926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 9 00:20:50.381675 containerd[1473]: time="2025-09-09T00:20:50.380111437Z" level=info msg="CreateContainer within sandbox \"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:20:50.415242 containerd[1473]: time="2025-09-09T00:20:50.415122398Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:50.416923 containerd[1473]: time="2025-09-09T00:20:50.416424379Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:50.417548 containerd[1473]: time="2025-09-09T00:20:50.417455303Z" level=info msg="CreateContainer within sandbox \"3ecb373151de1558fd072ff32aeaaedf210795728248e0612a8952136d445854\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"8305805c276b7ef5cf282582372e7060c5160c55e92b5a213aa5515f5f5e4d19\"" Sep 9 00:20:50.419000 containerd[1473]: time="2025-09-09T00:20:50.418306775Z" level=info msg="StartContainer for \"8305805c276b7ef5cf282582372e7060c5160c55e92b5a213aa5515f5f5e4d19\"" Sep 9 00:20:50.595711 systemd[1]: Started cri-containerd-8305805c276b7ef5cf282582372e7060c5160c55e92b5a213aa5515f5f5e4d19.scope - libcontainer container 8305805c276b7ef5cf282582372e7060c5160c55e92b5a213aa5515f5f5e4d19. Sep 9 00:20:50.729137 containerd[1473]: time="2025-09-09T00:20:50.728840736Z" level=info msg="StartContainer for \"8305805c276b7ef5cf282582372e7060c5160c55e92b5a213aa5515f5f5e4d19\" returns successfully" Sep 9 00:20:51.067803 kubelet[2562]: I0909 00:20:51.067262 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c9b45b4c5-f5xxs" podStartSLOduration=33.628237286 podStartE2EDuration="50.067239801s" podCreationTimestamp="2025-09-09 00:20:01 +0000 UTC" firstStartedPulling="2025-09-09 00:20:33.927768129 +0000 UTC m=+52.615758628" lastFinishedPulling="2025-09-09 00:20:50.366770644 +0000 UTC m=+69.054761143" observedRunningTime="2025-09-09 00:20:51.066700834 +0000 UTC m=+69.754691333" watchObservedRunningTime="2025-09-09 00:20:51.067239801 +0000 UTC m=+69.755230300" Sep 9 00:20:52.036349 kubelet[2562]: I0909 00:20:52.036274 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:20:53.306996 containerd[1473]: time="2025-09-09T00:20:53.306892613Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:53.311125 containerd[1473]: time="2025-09-09T00:20:53.310972029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 9 00:20:53.315405 containerd[1473]: time="2025-09-09T00:20:53.313911554Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:53.319054 containerd[1473]: time="2025-09-09T00:20:53.318950446Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:53.320194 containerd[1473]: time="2025-09-09T00:20:53.320153596Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.951905496s" Sep 9 00:20:53.320466 containerd[1473]: time="2025-09-09T00:20:53.320287111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 9 00:20:53.328018 containerd[1473]: time="2025-09-09T00:20:53.327933997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 9 00:20:53.350695 containerd[1473]: time="2025-09-09T00:20:53.350628789Z" level=info msg="CreateContainer within sandbox \"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 9 00:20:53.528241 containerd[1473]: time="2025-09-09T00:20:53.528132151Z" level=info msg="CreateContainer within sandbox \"c5b63e593a705821ee1d1edf39f41eff76a8fdc374821b18209f3346edd4b626\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b13991346269e3e04ae49540af9d4b2b35b49c60c97d98c387d0e3f19a5ee87\"" Sep 9 00:20:53.529680 containerd[1473]: time="2025-09-09T00:20:53.529628960Z" level=info msg="StartContainer for \"3b13991346269e3e04ae49540af9d4b2b35b49c60c97d98c387d0e3f19a5ee87\"" Sep 9 00:20:53.659832 systemd[1]: Started cri-containerd-3b13991346269e3e04ae49540af9d4b2b35b49c60c97d98c387d0e3f19a5ee87.scope - libcontainer container 3b13991346269e3e04ae49540af9d4b2b35b49c60c97d98c387d0e3f19a5ee87. Sep 9 00:20:53.741812 kubelet[2562]: I0909 00:20:53.741728 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:20:53.751985 containerd[1473]: time="2025-09-09T00:20:53.751893540Z" level=info msg="StartContainer for \"3b13991346269e3e04ae49540af9d4b2b35b49c60c97d98c387d0e3f19a5ee87\" returns successfully" Sep 9 00:20:53.793480 containerd[1473]: time="2025-09-09T00:20:53.792832380Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:53.794679 containerd[1473]: time="2025-09-09T00:20:53.794612317Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 9 00:20:53.798222 containerd[1473]: time="2025-09-09T00:20:53.797900357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 469.894643ms" Sep 9 00:20:53.798222 containerd[1473]: time="2025-09-09T00:20:53.797963938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 9 00:20:53.803655 containerd[1473]: time="2025-09-09T00:20:53.802928920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 9 00:20:53.824140 containerd[1473]: time="2025-09-09T00:20:53.824035678Z" level=info msg="CreateContainer within sandbox \"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 9 00:20:53.939696 containerd[1473]: time="2025-09-09T00:20:53.937829784Z" level=info msg="CreateContainer within sandbox \"794d2a45b1bfa23ec056f2144794cf2f946866a017f162c009587d06a91a59ce\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5ccbd83baffb253244498f856f968d6a06e0970f7c2338be3af8feae6c8957ad\"" Sep 9 00:20:53.943171 containerd[1473]: time="2025-09-09T00:20:53.942122015Z" level=info msg="StartContainer for \"5ccbd83baffb253244498f856f968d6a06e0970f7c2338be3af8feae6c8957ad\"" Sep 9 00:20:54.043703 systemd[1]: Started cri-containerd-5ccbd83baffb253244498f856f968d6a06e0970f7c2338be3af8feae6c8957ad.scope - libcontainer container 5ccbd83baffb253244498f856f968d6a06e0970f7c2338be3af8feae6c8957ad. 
Sep 9 00:20:54.096929 kubelet[2562]: I0909 00:20:54.096771 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7zmjc" podStartSLOduration=27.735325236 podStartE2EDuration="49.096673173s" podCreationTimestamp="2025-09-09 00:20:05 +0000 UTC" firstStartedPulling="2025-09-09 00:20:31.963231996 +0000 UTC m=+50.651222495" lastFinishedPulling="2025-09-09 00:20:53.324579933 +0000 UTC m=+72.012570432" observedRunningTime="2025-09-09 00:20:54.096201996 +0000 UTC m=+72.784192505" watchObservedRunningTime="2025-09-09 00:20:54.096673173 +0000 UTC m=+72.784663672" Sep 9 00:20:54.203168 containerd[1473]: time="2025-09-09T00:20:54.201810942Z" level=info msg="StartContainer for \"5ccbd83baffb253244498f856f968d6a06e0970f7c2338be3af8feae6c8957ad\" returns successfully" Sep 9 00:20:54.245092 systemd[1]: Started sshd@16-10.0.0.15:22-10.0.0.1:49168.service - OpenSSH per-connection server daemon (10.0.0.1:49168). Sep 9 00:20:54.434651 sshd[5954]: Accepted publickey for core from 10.0.0.1 port 49168 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:20:54.437238 sshd[5954]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:20:54.452201 systemd-logind[1454]: New session 17 of user core. Sep 9 00:20:54.464277 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 9 00:20:54.524071 kubelet[2562]: I0909 00:20:54.523837 2562 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 9 00:20:54.525531 kubelet[2562]: I0909 00:20:54.525509 2562 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 9 00:20:55.118477 kubelet[2562]: I0909 00:20:55.118221 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c9b45b4c5-8gmqn" podStartSLOduration=35.834903502 podStartE2EDuration="54.118189426s" podCreationTimestamp="2025-09-09 00:20:01 +0000 UTC" firstStartedPulling="2025-09-09 00:20:35.518357058 +0000 UTC m=+54.206347567" lastFinishedPulling="2025-09-09 00:20:53.801642972 +0000 UTC m=+72.489633491" observedRunningTime="2025-09-09 00:20:55.108396998 +0000 UTC m=+73.796387517" watchObservedRunningTime="2025-09-09 00:20:55.118189426 +0000 UTC m=+73.806179925" Sep 9 00:20:55.214456 sshd[5954]: pam_unix(sshd:session): session closed for user core Sep 9 00:20:55.227166 systemd[1]: sshd@16-10.0.0.15:22-10.0.0.1:49168.service: Deactivated successfully. Sep 9 00:20:55.231527 systemd[1]: session-17.scope: Deactivated successfully. Sep 9 00:20:55.234500 systemd-logind[1454]: Session 17 logged out. Waiting for processes to exit. Sep 9 00:20:55.236069 systemd-logind[1454]: Removed session 17. Sep 9 00:20:56.093594 kubelet[2562]: I0909 00:20:56.093401 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:20:58.262743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1198420456.mount: Deactivated successfully. 
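[Editor's note] The pod_startup_latency_tracker entry for csi-node-driver-7zmjc above is internally consistent: the E2E duration equals the observed-running timestamp minus podCreationTimestamp, and the SLO duration is the E2E duration minus the image-pull window (lastFinishedPulling − firstStartedPulling). A check in Go using the watchObservedRunningTime stamp from the entry (which timestamp field kubelet subtracts is inferred here from the arithmetic, not from its source):

```go
// Verifies the csi-node-driver-7zmjc numbers logged above:
// podStartE2EDuration = runningTime - podCreationTimestamp
// podStartSLOduration = E2E - (lastFinishedPulling - firstStartedPulling)
package main

import (
	"fmt"
	"time"
)

func ts(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := ts("2025-09-09T00:20:05Z")
	firstPull := ts("2025-09-09T00:20:31.963231996Z")
	lastPull := ts("2025-09-09T00:20:53.324579933Z")
	running := ts("2025-09-09T00:20:54.096673173Z")

	e2e := running.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 49.096673173s, as logged
	fmt.Println("podStartSLOduration:", slo) // 27.735325236s, as logged
}
```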
Sep 9 00:20:58.439997 containerd[1473]: time="2025-09-09T00:20:58.439781102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:58.442493 containerd[1473]: time="2025-09-09T00:20:58.442401993Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 9 00:20:58.444856 containerd[1473]: time="2025-09-09T00:20:58.444279100Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:58.453211 containerd[1473]: time="2025-09-09T00:20:58.447895303Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 9 00:20:58.453211 containerd[1473]: time="2025-09-09T00:20:58.449112206Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.646113794s" Sep 9 00:20:58.453211 containerd[1473]: time="2025-09-09T00:20:58.452396116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 9 00:20:58.482790 containerd[1473]: time="2025-09-09T00:20:58.481162166Z" level=info msg="CreateContainer within sandbox \"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 9 00:20:58.531794 containerd[1473]: time="2025-09-09T00:20:58.531116214Z" level=info msg="CreateContainer within sandbox \"01d45d5cc0d6fdc7a79de315a6797a2a9aea1bab08baf9e1f2b606c066402ca4\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"81965eb135d7f231ccae0231b4253ef173c2f197e7bdf31cb47668d857801cf1\"" Sep 9 00:20:58.540219 containerd[1473]: time="2025-09-09T00:20:58.539470902Z" level=info msg="StartContainer for \"81965eb135d7f231ccae0231b4253ef173c2f197e7bdf31cb47668d857801cf1\"" Sep 9 00:20:58.638381 systemd[1]: Started cri-containerd-81965eb135d7f231ccae0231b4253ef173c2f197e7bdf31cb47668d857801cf1.scope - libcontainer container 81965eb135d7f231ccae0231b4253ef173c2f197e7bdf31cb47668d857801cf1. Sep 9 00:20:58.729399 containerd[1473]: time="2025-09-09T00:20:58.729300922Z" level=info msg="StartContainer for \"81965eb135d7f231ccae0231b4253ef173c2f197e7bdf31cb47668d857801cf1\" returns successfully" Sep 9 00:21:00.261013 systemd[1]: Started sshd@17-10.0.0.15:22-10.0.0.1:57492.service - OpenSSH per-connection server daemon (10.0.0.1:57492). Sep 9 00:21:00.400671 sshd[6030]: Accepted publickey for core from 10.0.0.1 port 57492 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:00.406283 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:00.429813 systemd-logind[1454]: New session 18 of user core. Sep 9 00:21:00.440644 systemd[1]: Started session-18.scope - Session 18 of User core. 
Sep 9 00:21:01.425288 sshd[6030]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:01.436038 kubelet[2562]: E0909 00:21:01.432728 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:01.437218 systemd-logind[1454]: Session 18 logged out. Waiting for processes to exit. Sep 9 00:21:01.438319 systemd[1]: sshd@17-10.0.0.15:22-10.0.0.1:57492.service: Deactivated successfully. Sep 9 00:21:01.442589 systemd[1]: session-18.scope: Deactivated successfully. Sep 9 00:21:01.452561 systemd-logind[1454]: Removed session 18. Sep 9 00:21:02.136188 kubelet[2562]: I0909 00:21:02.135104 2562 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5db6d46c68-6wzfx" podStartSLOduration=5.008749704 podStartE2EDuration="31.135067003s" podCreationTimestamp="2025-09-09 00:20:31 +0000 UTC" firstStartedPulling="2025-09-09 00:20:32.330195161 +0000 UTC m=+51.018185660" lastFinishedPulling="2025-09-09 00:20:58.45651246 +0000 UTC m=+77.144502959" observedRunningTime="2025-09-09 00:20:59.213573189 +0000 UTC m=+77.901563688" watchObservedRunningTime="2025-09-09 00:21:02.135067003 +0000 UTC m=+80.823057502" Sep 9 00:21:06.461629 systemd[1]: Started sshd@18-10.0.0.15:22-10.0.0.1:57506.service - OpenSSH per-connection server daemon (10.0.0.1:57506). Sep 9 00:21:06.584408 kernel: hrtimer: interrupt took 5851196 ns Sep 9 00:21:06.602057 sshd[6082]: Accepted publickey for core from 10.0.0.1 port 57506 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:06.616024 sshd[6082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:06.632742 systemd-logind[1454]: New session 19 of user core. Sep 9 00:21:06.646802 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 9 00:21:06.967521 sshd[6082]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:06.981225 systemd[1]: sshd@18-10.0.0.15:22-10.0.0.1:57506.service: Deactivated successfully. Sep 9 00:21:06.990700 systemd[1]: session-19.scope: Deactivated successfully. Sep 9 00:21:07.005427 systemd-logind[1454]: Session 19 logged out. Waiting for processes to exit. Sep 9 00:21:07.020209 systemd-logind[1454]: Removed session 19. Sep 9 00:21:12.015007 systemd[1]: Started sshd@19-10.0.0.15:22-10.0.0.1:42468.service - OpenSSH per-connection server daemon (10.0.0.1:42468). Sep 9 00:21:12.201403 sshd[6117]: Accepted publickey for core from 10.0.0.1 port 42468 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:12.209466 sshd[6117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:12.224229 systemd-logind[1454]: New session 20 of user core. Sep 9 00:21:12.232753 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 9 00:21:13.133198 sshd[6117]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:13.144698 systemd[1]: sshd@19-10.0.0.15:22-10.0.0.1:42468.service: Deactivated successfully. Sep 9 00:21:13.150894 systemd[1]: session-20.scope: Deactivated successfully. Sep 9 00:21:13.152451 systemd-logind[1454]: Session 20 logged out. Waiting for processes to exit. Sep 9 00:21:13.154424 systemd-logind[1454]: Removed session 20. 
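[Editor's note] The recurring kubelet "Nameserver limits exceeded" error above stems from the glibc resolver honouring at most three nameservers in resolv.conf; kubelet applies the first three and logs the rest as omitted. A minimal sketch of that check (the fourth address below is hypothetical, added to trigger the warning; the log shows only the three that were applied):

```go
// Sketch of the check behind kubelet's "Nameserver limits exceeded" error:
// glibc's resolver uses at most three nameservers (MAXNS), so extras are
// dropped and only the first three are applied.
package main

import "fmt"

const maxNameservers = 3 // glibc MAXNS

func applyNameservers(ns []string) ([]string, error) {
	if len(ns) <= maxNameservers {
		return ns, nil
	}
	return ns[:maxNameservers], fmt.Errorf(
		"nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: %v",
		ns[:maxNameservers])
}

func main() {
	// 9.9.9.9 is a hypothetical fourth entry, not taken from the log.
	applied, err := applyNameservers([]string{"1.1.1.1", "1.0.0.1", "8.8.8.8", "9.9.9.9"})
	fmt.Println(applied, err)
}
```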
Sep 9 00:21:13.437026 kubelet[2562]: E0909 00:21:13.436237 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:16.304012 kubelet[2562]: I0909 00:21:16.303258 2562 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 9 00:21:16.428163 kubelet[2562]: E0909 00:21:16.427621 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:18.174047 systemd[1]: Started sshd@20-10.0.0.15:22-10.0.0.1:42482.service - OpenSSH per-connection server daemon (10.0.0.1:42482). Sep 9 00:21:18.288306 sshd[6167]: Accepted publickey for core from 10.0.0.1 port 42482 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:18.292866 sshd[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:18.308596 systemd-logind[1454]: New session 21 of user core. Sep 9 00:21:18.325789 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 9 00:21:18.428246 kubelet[2562]: E0909 00:21:18.428059 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:18.746033 sshd[6167]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:18.762950 systemd[1]: sshd@20-10.0.0.15:22-10.0.0.1:42482.service: Deactivated successfully. Sep 9 00:21:18.772920 systemd[1]: session-21.scope: Deactivated successfully. Sep 9 00:21:18.778700 systemd-logind[1454]: Session 21 logged out. Waiting for processes to exit. Sep 9 00:21:18.780323 systemd-logind[1454]: Removed session 21. Sep 9 00:21:23.783790 systemd[1]: Started sshd@21-10.0.0.15:22-10.0.0.1:40470.service - OpenSSH per-connection server daemon (10.0.0.1:40470). Sep 9 00:21:23.872238 sshd[6182]: Accepted publickey for core from 10.0.0.1 port 40470 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:23.875379 sshd[6182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:23.885339 systemd-logind[1454]: New session 22 of user core. Sep 9 00:21:23.890825 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 9 00:21:24.321721 sshd[6182]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:24.339976 systemd[1]: sshd@21-10.0.0.15:22-10.0.0.1:40470.service: Deactivated successfully. Sep 9 00:21:24.340669 systemd-logind[1454]: Session 22 logged out. Waiting for processes to exit. Sep 9 00:21:24.351453 systemd[1]: session-22.scope: Deactivated successfully. Sep 9 00:21:24.352989 systemd-logind[1454]: Removed session 22. Sep 9 00:21:26.428563 kubelet[2562]: E0909 00:21:26.427790 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 00:21:29.358927 systemd[1]: Started sshd@22-10.0.0.15:22-10.0.0.1:40486.service - OpenSSH per-connection server daemon (10.0.0.1:40486). Sep 9 00:21:29.480980 sshd[6196]: Accepted publickey for core from 10.0.0.1 port 40486 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:29.483568 sshd[6196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:29.493579 systemd-logind[1454]: New session 23 of user core. 
Sep 9 00:21:29.506811 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 9 00:21:29.736594 sshd[6196]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:29.763324 systemd[1]: sshd@22-10.0.0.15:22-10.0.0.1:40486.service: Deactivated successfully. Sep 9 00:21:29.774992 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 00:21:29.788911 systemd-logind[1454]: Session 23 logged out. Waiting for processes to exit. Sep 9 00:21:29.803261 systemd[1]: Started sshd@23-10.0.0.15:22-10.0.0.1:40488.service - OpenSSH per-connection server daemon (10.0.0.1:40488). Sep 9 00:21:29.805919 systemd-logind[1454]: Removed session 23. Sep 9 00:21:29.868397 sshd[6211]: Accepted publickey for core from 10.0.0.1 port 40488 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:29.872587 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:29.890997 systemd-logind[1454]: New session 24 of user core. Sep 9 00:21:29.900736 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 9 00:21:30.713257 sshd[6211]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:30.732472 systemd[1]: sshd@23-10.0.0.15:22-10.0.0.1:40488.service: Deactivated successfully. Sep 9 00:21:30.736907 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 00:21:30.740244 systemd-logind[1454]: Session 24 logged out. Waiting for processes to exit. Sep 9 00:21:30.751284 systemd[1]: Started sshd@24-10.0.0.15:22-10.0.0.1:47844.service - OpenSSH per-connection server daemon (10.0.0.1:47844). Sep 9 00:21:30.752177 systemd-logind[1454]: Removed session 24. Sep 9 00:21:30.855115 sshd[6224]: Accepted publickey for core from 10.0.0.1 port 47844 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:30.861945 sshd[6224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:30.879100 systemd-logind[1454]: New session 25 of user core. Sep 9 00:21:30.903830 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 00:21:32.173085 sshd[6224]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:32.193689 systemd[1]: sshd@24-10.0.0.15:22-10.0.0.1:47844.service: Deactivated successfully. Sep 9 00:21:32.196928 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 00:21:32.199665 systemd-logind[1454]: Session 25 logged out. Waiting for processes to exit. Sep 9 00:21:32.211447 systemd[1]: Started sshd@25-10.0.0.15:22-10.0.0.1:47846.service - OpenSSH per-connection server daemon (10.0.0.1:47846). Sep 9 00:21:32.213200 systemd-logind[1454]: Removed session 25. Sep 9 00:21:32.262492 sshd[6269]: Accepted publickey for core from 10.0.0.1 port 47846 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:32.265108 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:32.283422 systemd-logind[1454]: New session 26 of user core. Sep 9 00:21:32.298296 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 9 00:21:33.388570 sshd[6269]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:33.410710 systemd[1]: sshd@25-10.0.0.15:22-10.0.0.1:47846.service: Deactivated successfully. Sep 9 00:21:33.425343 systemd[1]: session-26.scope: Deactivated successfully. Sep 9 00:21:33.431581 systemd-logind[1454]: Session 26 logged out. Waiting for processes to exit. 
Sep 9 00:21:33.464373 systemd[1]: Started sshd@26-10.0.0.15:22-10.0.0.1:47862.service - OpenSSH per-connection server daemon (10.0.0.1:47862). Sep 9 00:21:33.472773 systemd-logind[1454]: Removed session 26. Sep 9 00:21:33.559535 sshd[6283]: Accepted publickey for core from 10.0.0.1 port 47862 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:33.563147 sshd[6283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:33.586205 systemd-logind[1454]: New session 27 of user core. Sep 9 00:21:33.594758 systemd[1]: Started session-27.scope - Session 27 of User core. Sep 9 00:21:33.945781 sshd[6283]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:33.954913 systemd[1]: sshd@26-10.0.0.15:22-10.0.0.1:47862.service: Deactivated successfully. Sep 9 00:21:33.961862 systemd[1]: session-27.scope: Deactivated successfully. Sep 9 00:21:33.963344 systemd-logind[1454]: Session 27 logged out. Waiting for processes to exit. Sep 9 00:21:33.973124 systemd-logind[1454]: Removed session 27. Sep 9 00:21:38.989110 systemd[1]: Started sshd@27-10.0.0.15:22-10.0.0.1:47876.service - OpenSSH per-connection server daemon (10.0.0.1:47876). Sep 9 00:21:39.050209 sshd[6319]: Accepted publickey for core from 10.0.0.1 port 47876 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:39.054451 sshd[6319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:39.073643 systemd-logind[1454]: New session 28 of user core. Sep 9 00:21:39.085772 systemd[1]: Started session-28.scope - Session 28 of User core. Sep 9 00:21:39.435953 sshd[6319]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:39.450388 systemd[1]: sshd@27-10.0.0.15:22-10.0.0.1:47876.service: Deactivated successfully. Sep 9 00:21:39.470155 systemd[1]: session-28.scope: Deactivated successfully. Sep 9 00:21:39.480842 systemd-logind[1454]: Session 28 logged out. Waiting for processes to exit. Sep 9 00:21:39.486050 systemd-logind[1454]: Removed session 28. Sep 9 00:21:39.927954 update_engine[1458]: I20250909 00:21:39.927750 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Sep 9 00:21:39.927954 update_engine[1458]: I20250909 00:21:39.927852 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Sep 9 00:21:39.935416 update_engine[1458]: I20250909 00:21:39.935159 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Sep 9 00:21:39.936776 update_engine[1458]: I20250909 00:21:39.936408 1458 omaha_request_params.cc:62] Current group set to lts Sep 9 00:21:39.936776 update_engine[1458]: I20250909 00:21:39.936625 1458 update_attempter.cc:499] Already updated boot flags. Skipping. Sep 9 00:21:39.936776 update_engine[1458]: I20250909 00:21:39.936642 1458 update_attempter.cc:643] Scheduling an action processor start. 
Sep 9 00:21:39.936776 update_engine[1458]: I20250909 00:21:39.936681 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 9 00:21:39.936776 update_engine[1458]: I20250909 00:21:39.936744 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 9 00:21:39.936958 update_engine[1458]: I20250909 00:21:39.936841 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 9 00:21:39.936958 update_engine[1458]: I20250909 00:21:39.936856 1458 omaha_request_action.cc:272] Request:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]:
Sep 9 00:21:39.936958 update_engine[1458]: I20250909 00:21:39.936866 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:21:39.958125 update_engine[1458]: I20250909 00:21:39.957615 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:21:39.958125 update_engine[1458]: I20250909 00:21:39.958004 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:21:39.972891 update_engine[1458]: E20250909 00:21:39.972704 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 9 00:21:39.972891 update_engine[1458]: I20250909 00:21:39.972841 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 9 00:21:39.974897 locksmithd[1481]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 9 00:21:43.434842 kubelet[2562]: E0909 00:21:43.434783 2562 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 00:21:44.488855 systemd[1]: Started sshd@28-10.0.0.15:22-10.0.0.1:36722.service - OpenSSH per-connection server daemon (10.0.0.1:36722).
Sep 9 00:21:44.589664 sshd[6357]: Accepted publickey for core from 10.0.0.1 port 36722 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M
Sep 9 00:21:44.592708 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 00:21:44.604893 systemd-logind[1454]: New session 29 of user core.
Sep 9 00:21:44.614885 systemd[1]: Started session-29.scope - Session 29 of User core.
Sep 9 00:21:45.525155 sshd[6357]: pam_unix(sshd:session): session closed for user core
Sep 9 00:21:45.540105 systemd[1]: sshd@28-10.0.0.15:22-10.0.0.1:36722.service: Deactivated successfully.
Sep 9 00:21:45.555201 systemd[1]: session-29.scope: Deactivated successfully.
Sep 9 00:21:45.562300 systemd-logind[1454]: Session 29 logged out. Waiting for processes to exit.
Sep 9 00:21:45.570550 systemd-logind[1454]: Removed session 29.
Sep 9 00:21:49.836097 update_engine[1458]: I20250909 00:21:49.834875 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 9 00:21:49.836097 update_engine[1458]: I20250909 00:21:49.835345 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 9 00:21:49.840450 update_engine[1458]: I20250909 00:21:49.837811 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 9 00:21:49.852564 update_engine[1458]: E20250909 00:21:49.852080 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Sep 9 00:21:49.852564 update_engine[1458]: I20250909 00:21:49.852237 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Sep 9 00:21:50.554811 systemd[1]: Started sshd@29-10.0.0.15:22-10.0.0.1:56976.service - OpenSSH per-connection server daemon (10.0.0.1:56976). Sep 9 00:21:50.679917 sshd[6397]: Accepted publickey for core from 10.0.0.1 port 56976 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:50.682651 sshd[6397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:50.699422 systemd-logind[1454]: New session 30 of user core. Sep 9 00:21:50.709565 systemd[1]: Started session-30.scope - Session 30 of User core. Sep 9 00:21:51.274947 sshd[6397]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:51.286740 systemd[1]: sshd@29-10.0.0.15:22-10.0.0.1:56976.service: Deactivated successfully. Sep 9 00:21:51.292394 systemd[1]: session-30.scope: Deactivated successfully. Sep 9 00:21:51.300548 systemd-logind[1454]: Session 30 logged out. Waiting for processes to exit. Sep 9 00:21:51.304200 systemd-logind[1454]: Removed session 30. Sep 9 00:21:56.291522 systemd[1]: Started sshd@30-10.0.0.15:22-10.0.0.1:56980.service - OpenSSH per-connection server daemon (10.0.0.1:56980). Sep 9 00:21:56.343946 sshd[6441]: Accepted publickey for core from 10.0.0.1 port 56980 ssh2: RSA SHA256:KA9/SrLi0HJZnQ3nz9u7pIgg2ymhn74LV9fPMAvvX5M Sep 9 00:21:56.347052 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 00:21:56.361923 systemd-logind[1454]: New session 31 of user core. Sep 9 00:21:56.369237 systemd[1]: Started session-31.scope - Session 31 of User core. Sep 9 00:21:56.615813 sshd[6441]: pam_unix(sshd:session): session closed for user core Sep 9 00:21:56.622863 systemd[1]: sshd@30-10.0.0.15:22-10.0.0.1:56980.service: Deactivated successfully. Sep 9 00:21:56.627291 systemd[1]: session-31.scope: Deactivated successfully. Sep 9 00:21:56.629421 systemd-logind[1454]: Session 31 logged out. Waiting for processes to exit. Sep 9 00:21:56.630959 systemd-logind[1454]: Removed session 31.
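[Editor's note] The update_engine entries above show the shape of the Omaha check when updates are disabled: the server URL is literally "disabled", so name resolution fails ("Could not resolve host: disabled") and the fetcher schedules another attempt (retry 1 at 00:21:39, retry 2 roughly ten seconds later). A rough Go sketch of that loop, using a stand-in URL; this is illustrative only, not update_engine's actual implementation:

```go
// Illustrative fetch-and-retry loop matching the update_engine entries
// above: an unresolvable Omaha URL, a short per-request timeout, and a
// fixed pause between attempts (~10 s in the log).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	const omahaURL = "https://disabled/" // stand-in for the literal "disabled" host
	client := &http.Client{Timeout: 1 * time.Second}
	for attempt := 1; attempt <= 3; attempt++ {
		resp, err := client.Post(omahaURL, "text/xml", nil)
		if err == nil {
			resp.Body.Close()
			break
		}
		fmt.Printf("no HTTP response, retry %d: %v\n", attempt, err)
		time.Sleep(10 * time.Second) // matches the ~10 s gap between attempts
	}
}
```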