Sep 13 00:09:45.018914 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 22:30:50 -00 2025
Sep 13 00:09:45.018940 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:45.018955 kernel: BIOS-provided physical RAM map:
Sep 13 00:09:45.018963 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 13 00:09:45.019004 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 13 00:09:45.019012 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 13 00:09:45.019019 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 13 00:09:45.019026 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 13 00:09:45.019032 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 13 00:09:45.019041 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 13 00:09:45.019048 kernel: NX (Execute Disable) protection: active
Sep 13 00:09:45.019055 kernel: APIC: Static calls initialized
Sep 13 00:09:45.019066 kernel: SMBIOS 2.8 present.
Sep 13 00:09:45.019073 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 13 00:09:45.019081 kernel: Hypervisor detected: KVM
Sep 13 00:09:45.019091 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 13 00:09:45.019102 kernel: kvm-clock: using sched offset of 3408003804 cycles
Sep 13 00:09:45.019110 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 13 00:09:45.019117 kernel: tsc: Detected 1995.312 MHz processor
Sep 13 00:09:45.019125 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 13 00:09:45.019132 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 13 00:09:45.019140 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 13 00:09:45.019147 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 13 00:09:45.019154 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 13 00:09:45.019164 kernel: ACPI: Early table checksum verification disabled
Sep 13 00:09:45.019171 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 13 00:09:45.019178 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019185 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019192 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019199 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 13 00:09:45.019206 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019213 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019220 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019230 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 13 00:09:45.019237 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 13 00:09:45.019244 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 13 00:09:45.019251 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 13 00:09:45.019258 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 13 00:09:45.019265 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 13 00:09:45.019272 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 13 00:09:45.019285 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 13 00:09:45.019293 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 13 00:09:45.019300 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 13 00:09:45.019308 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 13 00:09:45.019316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 13 00:09:45.019336 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 13 00:09:45.019350 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 13 00:09:45.019366 kernel: Zone ranges:
Sep 13 00:09:45.019378 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 13 00:09:45.019386 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 13 00:09:45.019393 kernel: Normal empty
Sep 13 00:09:45.019401 kernel: Movable zone start for each node
Sep 13 00:09:45.019408 kernel: Early memory node ranges
Sep 13 00:09:45.019415 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 13 00:09:45.019423 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 13 00:09:45.019430 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 13 00:09:45.019441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 13 00:09:45.019448 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 13 00:09:45.019460 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 13 00:09:45.019484 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 13 00:09:45.019497 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 13 00:09:45.019511 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 13 00:09:45.019524 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 13 00:09:45.019536 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 13 00:09:45.019544 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 13 00:09:45.019556 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 13 00:09:45.019564 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 13 00:09:45.019571 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 13 00:09:45.019579 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 13 00:09:45.019586 kernel: TSC deadline timer available
Sep 13 00:09:45.019594 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 13 00:09:45.019601 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 13 00:09:45.019609 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 13 00:09:45.019621 kernel: Booting paravirtualized kernel on KVM
Sep 13 00:09:45.019628 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 13 00:09:45.019639 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 13 00:09:45.019654 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 13 00:09:45.019667 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 13 00:09:45.019678 kernel: pcpu-alloc: [0] 0 1
Sep 13 00:09:45.019689 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 13 00:09:45.020925 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:45.020949 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 13 00:09:45.020957 kernel: random: crng init done
Sep 13 00:09:45.020971 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 13 00:09:45.020979 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 13 00:09:45.020987 kernel: Fallback order for Node 0: 0
Sep 13 00:09:45.020995 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 13 00:09:45.021002 kernel: Policy zone: DMA32
Sep 13 00:09:45.021010 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 13 00:09:45.021018 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 13 00:09:45.021026 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 13 00:09:45.021037 kernel: Kernel/User page tables isolation: enabled
Sep 13 00:09:45.021045 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 13 00:09:45.021052 kernel: ftrace: allocated 149 pages with 4 groups
Sep 13 00:09:45.021060 kernel: Dynamic Preempt: voluntary
Sep 13 00:09:45.021067 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 13 00:09:45.021076 kernel: rcu: RCU event tracing is enabled.
Sep 13 00:09:45.021084 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 13 00:09:45.021092 kernel: Trampoline variant of Tasks RCU enabled.
Sep 13 00:09:45.021100 kernel: Rude variant of Tasks RCU enabled.
Sep 13 00:09:45.021107 kernel: Tracing variant of Tasks RCU enabled.
Sep 13 00:09:45.021118 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 13 00:09:45.021126 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 13 00:09:45.021133 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 13 00:09:45.021141 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 13 00:09:45.021154 kernel: Console: colour VGA+ 80x25
Sep 13 00:09:45.021162 kernel: printk: console [tty0] enabled
Sep 13 00:09:45.021169 kernel: printk: console [ttyS0] enabled
Sep 13 00:09:45.021177 kernel: ACPI: Core revision 20230628
Sep 13 00:09:45.021184 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 13 00:09:45.021195 kernel: APIC: Switch to symmetric I/O mode setup
Sep 13 00:09:45.021203 kernel: x2apic enabled
Sep 13 00:09:45.021210 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 13 00:09:45.021218 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 13 00:09:45.021225 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Sep 13 00:09:45.021233 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
Sep 13 00:09:45.021241 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 13 00:09:45.021249 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 13 00:09:45.021268 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 13 00:09:45.021276 kernel: Spectre V2 : Mitigation: Retpolines
Sep 13 00:09:45.021284 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 13 00:09:45.021296 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 13 00:09:45.021310 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 13 00:09:45.021318 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 13 00:09:45.021326 kernel: MDS: Mitigation: Clear CPU buffers
Sep 13 00:09:45.021334 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 13 00:09:45.021342 kernel: active return thunk: its_return_thunk
Sep 13 00:09:45.021358 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 13 00:09:45.021366 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 13 00:09:45.021374 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 13 00:09:45.021382 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 13 00:09:45.021391 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 13 00:09:45.021399 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 13 00:09:45.021407 kernel: Freeing SMP alternatives memory: 32K
Sep 13 00:09:45.021415 kernel: pid_max: default: 32768 minimum: 301
Sep 13 00:09:45.021426 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 13 00:09:45.021434 kernel: landlock: Up and running.
Sep 13 00:09:45.021443 kernel: SELinux: Initializing.
Sep 13 00:09:45.021451 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:09:45.021459 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 13 00:09:45.021468 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 13 00:09:45.021476 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:45.021484 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:45.021493 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 13 00:09:45.021509 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 13 00:09:45.021522 kernel: signal: max sigframe size: 1776
Sep 13 00:09:45.021536 kernel: rcu: Hierarchical SRCU implementation.
Sep 13 00:09:45.021552 kernel: rcu: Max phase no-delay instances is 400.
Sep 13 00:09:45.021564 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 13 00:09:45.021575 kernel: smp: Bringing up secondary CPUs ...
Sep 13 00:09:45.021587 kernel: smpboot: x86: Booting SMP configuration:
Sep 13 00:09:45.021598 kernel: .... node #0, CPUs: #1
Sep 13 00:09:45.021617 kernel: smp: Brought up 1 node, 2 CPUs
Sep 13 00:09:45.021634 kernel: smpboot: Max logical packages: 1
Sep 13 00:09:45.021647 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Sep 13 00:09:45.021659 kernel: devtmpfs: initialized
Sep 13 00:09:45.021672 kernel: x86/mm: Memory block size: 128MB
Sep 13 00:09:45.021730 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 13 00:09:45.021744 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 13 00:09:45.021757 kernel: pinctrl core: initialized pinctrl subsystem
Sep 13 00:09:45.021770 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 13 00:09:45.021782 kernel: audit: initializing netlink subsys (disabled)
Sep 13 00:09:45.021801 kernel: audit: type=2000 audit(1757722183.524:1): state=initialized audit_enabled=0 res=1
Sep 13 00:09:45.021813 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 13 00:09:45.021826 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 13 00:09:45.021839 kernel: cpuidle: using governor menu
Sep 13 00:09:45.021852 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 13 00:09:45.021862 kernel: dca service started, version 1.12.1
Sep 13 00:09:45.021870 kernel: PCI: Using configuration type 1 for base access
Sep 13 00:09:45.021879 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 13 00:09:45.021888 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 13 00:09:45.021900 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 13 00:09:45.021909 kernel: ACPI: Added _OSI(Module Device)
Sep 13 00:09:45.021917 kernel: ACPI: Added _OSI(Processor Device)
Sep 13 00:09:45.021926 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 13 00:09:45.021934 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 13 00:09:45.021942 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 13 00:09:45.021950 kernel: ACPI: Interpreter enabled
Sep 13 00:09:45.021958 kernel: ACPI: PM: (supports S0 S5)
Sep 13 00:09:45.021967 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 13 00:09:45.021979 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 13 00:09:45.021987 kernel: PCI: Using E820 reservations for host bridge windows
Sep 13 00:09:45.021996 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 13 00:09:45.022004 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 13 00:09:45.022262 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 13 00:09:45.022381 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 13 00:09:45.022479 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 13 00:09:45.022497 kernel: acpiphp: Slot [3] registered
Sep 13 00:09:45.022514 kernel: acpiphp: Slot [4] registered
Sep 13 00:09:45.022526 kernel: acpiphp: Slot [5] registered
Sep 13 00:09:45.022538 kernel: acpiphp: Slot [6] registered
Sep 13 00:09:45.022551 kernel: acpiphp: Slot [7] registered
Sep 13 00:09:45.022562 kernel: acpiphp: Slot [8] registered
Sep 13 00:09:45.022573 kernel: acpiphp: Slot [9] registered
Sep 13 00:09:45.022586 kernel: acpiphp: Slot [10] registered
Sep 13 00:09:45.022598 kernel: acpiphp: Slot [11] registered
Sep 13 00:09:45.022614 kernel: acpiphp: Slot [12] registered
Sep 13 00:09:45.022626 kernel: acpiphp: Slot [13] registered
Sep 13 00:09:45.022639 kernel: acpiphp: Slot [14] registered
Sep 13 00:09:45.022652 kernel: acpiphp: Slot [15] registered
Sep 13 00:09:45.022664 kernel: acpiphp: Slot [16] registered
Sep 13 00:09:45.022677 kernel: acpiphp: Slot [17] registered
Sep 13 00:09:45.022692 kernel: acpiphp: Slot [18] registered
Sep 13 00:09:45.023292 kernel: acpiphp: Slot [19] registered
Sep 13 00:09:45.023305 kernel: acpiphp: Slot [20] registered
Sep 13 00:09:45.023313 kernel: acpiphp: Slot [21] registered
Sep 13 00:09:45.023335 kernel: acpiphp: Slot [22] registered
Sep 13 00:09:45.023348 kernel: acpiphp: Slot [23] registered
Sep 13 00:09:45.023361 kernel: acpiphp: Slot [24] registered
Sep 13 00:09:45.023374 kernel: acpiphp: Slot [25] registered
Sep 13 00:09:45.023388 kernel: acpiphp: Slot [26] registered
Sep 13 00:09:45.023396 kernel: acpiphp: Slot [27] registered
Sep 13 00:09:45.023404 kernel: acpiphp: Slot [28] registered
Sep 13 00:09:45.023412 kernel: acpiphp: Slot [29] registered
Sep 13 00:09:45.023421 kernel: acpiphp: Slot [30] registered
Sep 13 00:09:45.023433 kernel: acpiphp: Slot [31] registered
Sep 13 00:09:45.023441 kernel: PCI host bridge to bus 0000:00
Sep 13 00:09:45.023613 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 13 00:09:45.023765 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 13 00:09:45.023858 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 13 00:09:45.023947 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 13 00:09:45.024076 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 13 00:09:45.024193 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 13 00:09:45.024347 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 13 00:09:45.024486 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 13 00:09:45.024600 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 13 00:09:45.024726 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 13 00:09:45.024829 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 13 00:09:45.024926 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 13 00:09:45.025039 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 13 00:09:45.025154 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 13 00:09:45.025291 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 13 00:09:45.025390 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 13 00:09:45.025497 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 13 00:09:45.025627 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 13 00:09:45.025784 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 13 00:09:45.025911 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 13 00:09:45.026052 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 13 00:09:45.026157 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 13 00:09:45.026257 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 13 00:09:45.026354 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 13 00:09:45.026460 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 13 00:09:45.026576 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:09:45.026673 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 13 00:09:45.026809 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 13 00:09:45.026916 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 13 00:09:45.027046 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 13 00:09:45.027147 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 13 00:09:45.027246 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 13 00:09:45.027407 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 13 00:09:45.027652 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 13 00:09:45.027782 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 13 00:09:45.027884 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 13 00:09:45.027994 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 13 00:09:45.028136 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:09:45.028253 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 13 00:09:45.028382 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 13 00:09:45.028511 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 13 00:09:45.028665 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 13 00:09:45.028831 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 13 00:09:45.028947 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 13 00:09:45.029058 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 13 00:09:45.029165 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 13 00:09:45.029269 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 13 00:09:45.029367 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 13 00:09:45.029384 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 13 00:09:45.029393 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 13 00:09:45.029402 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 13 00:09:45.029411 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 13 00:09:45.029419 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 13 00:09:45.029431 kernel: iommu: Default domain type: Translated
Sep 13 00:09:45.029439 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 13 00:09:45.029447 kernel: PCI: Using ACPI for IRQ routing
Sep 13 00:09:45.029456 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 13 00:09:45.029464 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 13 00:09:45.029472 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 13 00:09:45.029575 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 13 00:09:45.029673 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 13 00:09:45.030683 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 13 00:09:45.030719 kernel: vgaarb: loaded
Sep 13 00:09:45.030729 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 13 00:09:45.030738 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 13 00:09:45.030746 kernel: clocksource: Switched to clocksource kvm-clock
Sep 13 00:09:45.030754 kernel: VFS: Disk quotas dquot_6.6.0
Sep 13 00:09:45.030763 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 13 00:09:45.030771 kernel: pnp: PnP ACPI init
Sep 13 00:09:45.030780 kernel: pnp: PnP ACPI: found 4 devices
Sep 13 00:09:45.030794 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 13 00:09:45.030802 kernel: NET: Registered PF_INET protocol family
Sep 13 00:09:45.030811 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 13 00:09:45.030820 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 13 00:09:45.030828 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 13 00:09:45.030837 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 13 00:09:45.030845 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 13 00:09:45.030853 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 13 00:09:45.030861 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:09:45.030873 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 13 00:09:45.030881 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 13 00:09:45.030889 kernel: NET: Registered PF_XDP protocol family
Sep 13 00:09:45.030992 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 13 00:09:45.031119 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 13 00:09:45.031207 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 13 00:09:45.031303 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 13 00:09:45.031393 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 13 00:09:45.031533 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 13 00:09:45.031687 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 13 00:09:45.032126 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 13 00:09:45.033781 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 41585 usecs
Sep 13 00:09:45.033807 kernel: PCI: CLS 0 bytes, default 64
Sep 13 00:09:45.033817 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 13 00:09:45.033826 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
Sep 13 00:09:45.033835 kernel: Initialise system trusted keyrings
Sep 13 00:09:45.033845 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 13 00:09:45.033858 kernel: Key type asymmetric registered
Sep 13 00:09:45.033867 kernel: Asymmetric key parser 'x509' registered
Sep 13 00:09:45.033875 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 13 00:09:45.033883 kernel: io scheduler mq-deadline registered
Sep 13 00:09:45.033892 kernel: io scheduler kyber registered
Sep 13 00:09:45.033900 kernel: io scheduler bfq registered
Sep 13 00:09:45.033909 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 13 00:09:45.033918 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 13 00:09:45.033927 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 13 00:09:45.033939 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 13 00:09:45.033947 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 13 00:09:45.033955 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 13 00:09:45.033964 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 13 00:09:45.033972 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 13 00:09:45.033980 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 13 00:09:45.033988 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 13 00:09:45.034126 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 13 00:09:45.034234 kernel: rtc_cmos 00:03: registered as rtc0
Sep 13 00:09:45.034343 kernel: rtc_cmos 00:03: setting system clock to 2025-09-13T00:09:44 UTC (1757722184)
Sep 13 00:09:45.034438 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 13 00:09:45.034449 kernel: intel_pstate: CPU model not supported
Sep 13 00:09:45.034457 kernel: NET: Registered PF_INET6 protocol family
Sep 13 00:09:45.034466 kernel: Segment Routing with IPv6
Sep 13 00:09:45.034474 kernel: In-situ OAM (IOAM) with IPv6
Sep 13 00:09:45.034483 kernel: NET: Registered PF_PACKET protocol family
Sep 13 00:09:45.034496 kernel: Key type dns_resolver registered
Sep 13 00:09:45.034505 kernel: IPI shorthand broadcast: enabled
Sep 13 00:09:45.034519 kernel: sched_clock: Marking stable (1085010330, 183333867)->(1430098142, -161753945)
Sep 13 00:09:45.034535 kernel: registered taskstats version 1
Sep 13 00:09:45.034547 kernel: Loading compiled-in X.509 certificates
Sep 13 00:09:45.034561 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 1274e0c573ac8d09163d6bc6d1ee1445fb2f8cc6'
Sep 13 00:09:45.034570 kernel: Key type .fscrypt registered
Sep 13 00:09:45.034578 kernel: Key type fscrypt-provisioning registered
Sep 13 00:09:45.034586 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 13 00:09:45.034598 kernel: ima: Allocated hash algorithm: sha1
Sep 13 00:09:45.034607 kernel: ima: No architecture policies found
Sep 13 00:09:45.034615 kernel: clk: Disabling unused clocks
Sep 13 00:09:45.034623 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 13 00:09:45.034632 kernel: Write protecting the kernel read-only data: 36864k
Sep 13 00:09:45.034658 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 13 00:09:45.034669 kernel: Run /init as init process
Sep 13 00:09:45.034690 kernel: with arguments:
Sep 13 00:09:45.034745 kernel: /init
Sep 13 00:09:45.035573 kernel: with environment:
Sep 13 00:09:45.035593 kernel: HOME=/
Sep 13 00:09:45.035602 kernel: TERM=linux
Sep 13 00:09:45.035611 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 13 00:09:45.035623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:09:45.035635 systemd[1]: Detected virtualization kvm.
Sep 13 00:09:45.035645 systemd[1]: Detected architecture x86-64.
Sep 13 00:09:45.035696 systemd[1]: Running in initrd.
Sep 13 00:09:45.035773 systemd[1]: No hostname configured, using default hostname.
Sep 13 00:09:45.035783 systemd[1]: Hostname set to <localhost>.
Sep 13 00:09:45.035792 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:09:45.035801 systemd[1]: Queued start job for default target initrd.target.
Sep 13 00:09:45.035810 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:09:45.035820 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:09:45.035830 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 13 00:09:45.035839 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:09:45.035852 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 13 00:09:45.035861 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 13 00:09:45.035874 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 13 00:09:45.035892 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 13 00:09:45.035906 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:09:45.035921 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:09:45.035933 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:09:45.035949 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:09:45.035962 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:09:45.035981 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:09:45.035992 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:09:45.036001 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:09:45.036013 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:09:45.036022 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:09:45.036031 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:09:45.036040 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:09:45.036049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:09:45.036058 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:09:45.036067 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 13 00:09:45.036076 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:09:45.036085 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 13 00:09:45.036096 systemd[1]: Starting systemd-fsck-usr.service...
Sep 13 00:09:45.036105 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:09:45.036114 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:09:45.036123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:45.036132 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 13 00:09:45.036179 systemd-journald[184]: Collecting audit messages is disabled.
Sep 13 00:09:45.036206 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:09:45.036216 systemd[1]: Finished systemd-fsck-usr.service.
Sep 13 00:09:45.036234 systemd-journald[184]: Journal started
Sep 13 00:09:45.036270 systemd-journald[184]: Runtime Journal (/run/log/journal/aff756ac4d0f4d1aa3250f4976cc0c25) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:09:45.033444 systemd-modules-load[185]: Inserted module 'overlay'
Sep 13 00:09:45.051739 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:09:45.070752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 13 00:09:45.073084 systemd-modules-load[185]: Inserted module 'br_netfilter'
Sep 13 00:09:45.098161 kernel: Bridge firewalling registered
Sep 13 00:09:45.098191 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:09:45.105794 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:09:45.113667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:45.114797 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:09:45.124000 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:45.127973 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:09:45.134979 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:09:45.137646 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:09:45.158108 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:09:45.166936 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:09:45.170538 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:45.182031 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:45.183012 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:45.185926 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 13 00:09:45.215100 systemd-resolved[219]: Positive Trust Anchors:
Sep 13 00:09:45.215909 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:09:45.218615 dracut-cmdline[221]: dracut-dracut-053
Sep 13 00:09:45.215946 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:09:45.220433 systemd-resolved[219]: Defaulting to hostname 'linux'.
Sep 13 00:09:45.227276 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=2945e6465d436b7d1da8a9350a0544af0bd9aec821cd06987451d5e1d3071534
Sep 13 00:09:45.224601 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:45.225659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:45.342984 kernel: SCSI subsystem initialized
Sep 13 00:09:45.357769 kernel: Loading iSCSI transport class v2.0-870.
Sep 13 00:09:45.373754 kernel: iscsi: registered transport (tcp)
Sep 13 00:09:45.407059 kernel: iscsi: registered transport (qla4xxx)
Sep 13 00:09:45.407183 kernel: QLogic iSCSI HBA Driver
Sep 13 00:09:45.472256 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:09:45.480007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 13 00:09:45.512343 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 13 00:09:45.512438 kernel: device-mapper: uevent: version 1.0.3
Sep 13 00:09:45.512459 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 13 00:09:45.561749 kernel: raid6: avx2x4 gen() 22804 MB/s
Sep 13 00:09:45.578748 kernel: raid6: avx2x2 gen() 26208 MB/s
Sep 13 00:09:45.596020 kernel: raid6: avx2x1 gen() 21172 MB/s
Sep 13 00:09:45.596118 kernel: raid6: using algorithm avx2x2 gen() 26208 MB/s
Sep 13 00:09:45.614743 kernel: raid6: .... xor() 15621 MB/s, rmw enabled
Sep 13 00:09:45.614832 kernel: raid6: using avx2x2 recovery algorithm
Sep 13 00:09:45.640765 kernel: xor: automatically using best checksumming function avx
Sep 13 00:09:45.834757 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 13 00:09:45.850563 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:09:45.858972 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:09:45.886208 systemd-udevd[403]: Using default interface naming scheme 'v255'.
Sep 13 00:09:45.893108 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:09:45.901998 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 13 00:09:45.930827 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Sep 13 00:09:45.977749 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:09:45.985093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:09:46.053171 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:09:46.061252 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:09:46.086230 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:09:46.089741 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:09:46.090412 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:09:46.093617 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:09:46.100963 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:09:46.133686 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:09:46.159576 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 13 00:09:46.159944 kernel: scsi host0: Virtio SCSI HBA
Sep 13 00:09:46.178770 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 13 00:09:46.182728 kernel: cryptd: max_cpu_qlen set to 1000
Sep 13 00:09:46.212947 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:09:46.213024 kernel: GPT:9289727 != 125829119
Sep 13 00:09:46.213036 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:09:46.214376 kernel: GPT:9289727 != 125829119
Sep 13 00:09:46.214428 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:09:46.216290 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:09:46.223860 kernel: ACPI: bus type USB registered
Sep 13 00:09:46.223976 kernel: usbcore: registered new interface driver usbfs
Sep 13 00:09:46.230248 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:09:46.230460 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:46.235957 kernel: usbcore: registered new interface driver hub
Sep 13 00:09:46.235999 kernel: usbcore: registered new device driver usb
Sep 13 00:09:46.235781 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:46.236945 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:46.238973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:46.239899 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:46.255119 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 13 00:09:46.255186 kernel: AES CTR mode by8 optimization enabled
Sep 13 00:09:46.256178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:46.263337 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 13 00:09:46.280154 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 13 00:09:46.333065 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 13 00:09:46.337322 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 13 00:09:46.337466 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 13 00:09:46.337637 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 13 00:09:46.339873 kernel: libata version 3.00 loaded.
Sep 13 00:09:46.339902 kernel: BTRFS: device fsid fa70a3b0-3d47-4508-bba0-9fa4607626aa devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (451)
Sep 13 00:09:46.343690 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (459)
Sep 13 00:09:46.349778 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 13 00:09:46.350749 kernel: hub 1-0:1.0: USB hub found
Sep 13 00:09:46.351020 kernel: hub 1-0:1.0: 2 ports detected
Sep 13 00:09:46.359682 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:09:46.408760 kernel: scsi host1: ata_piix
Sep 13 00:09:46.409049 kernel: scsi host2: ata_piix
Sep 13 00:09:46.409207 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 13 00:09:46.409221 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 13 00:09:46.413441 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:46.420431 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:09:46.425506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:09:46.429966 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:09:46.430698 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:09:46.443188 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:09:46.448695 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:09:46.452490 disk-uuid[541]: Primary Header is updated.
Sep 13 00:09:46.452490 disk-uuid[541]: Secondary Entries is updated.
Sep 13 00:09:46.452490 disk-uuid[541]: Secondary Header is updated.
Sep 13 00:09:46.458767 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:09:46.463752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:09:46.495199 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:47.468140 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:09:47.468245 disk-uuid[542]: The operation has completed successfully.
Sep 13 00:09:47.511805 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:09:47.512789 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:09:47.531020 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:09:47.549142 sh[562]: Success
Sep 13 00:09:47.568746 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 13 00:09:47.638761 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:09:47.648933 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:09:47.653921 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:09:47.685247 kernel: BTRFS info (device dm-0): first mount of filesystem fa70a3b0-3d47-4508-bba0-9fa4607626aa
Sep 13 00:09:47.685336 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:47.685350 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:09:47.686785 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:09:47.688038 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:09:47.699486 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:09:47.700865 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:09:47.718155 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:09:47.721941 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:09:47.737837 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:47.737924 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:47.740299 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:09:47.743730 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:09:47.759104 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:09:47.760962 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:47.768855 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:09:47.772996 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:09:47.871591 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:09:47.888152 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:09:47.927268 systemd-networkd[745]: lo: Link UP
Sep 13 00:09:47.927280 systemd-networkd[745]: lo: Gained carrier
Sep 13 00:09:47.932062 systemd-networkd[745]: Enumeration completed
Sep 13 00:09:47.932895 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:09:47.934030 systemd[1]: Reached target network.target - Network.
Sep 13 00:09:47.934872 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:09:47.934877 systemd-networkd[745]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 13 00:09:47.937003 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:47.937008 systemd-networkd[745]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:09:47.938000 systemd-networkd[745]: eth0: Link UP
Sep 13 00:09:47.938006 systemd-networkd[745]: eth0: Gained carrier
Sep 13 00:09:47.938018 systemd-networkd[745]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 13 00:09:47.944242 systemd-networkd[745]: eth1: Link UP
Sep 13 00:09:47.944254 systemd-networkd[745]: eth1: Gained carrier
Sep 13 00:09:47.944271 systemd-networkd[745]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:09:47.950594 ignition[651]: Ignition 2.19.0
Sep 13 00:09:47.950611 ignition[651]: Stage: fetch-offline
Sep 13 00:09:47.950660 ignition[651]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:47.950674 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:47.954451 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:09:47.950824 ignition[651]: parsed url from cmdline: ""
Sep 13 00:09:47.950828 ignition[651]: no config URL provided
Sep 13 00:09:47.950836 ignition[651]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:47.957794 systemd-networkd[745]: eth0: DHCPv4 address 164.90.159.5/20, gateway 164.90.144.1 acquired from 169.254.169.253
Sep 13 00:09:47.950845 ignition[651]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:47.963901 systemd-networkd[745]: eth1: DHCPv4 address 10.124.0.26/20 acquired from 169.254.169.253
Sep 13 00:09:47.950851 ignition[651]: failed to fetch config: resource requires networking
Sep 13 00:09:47.964374 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 13 00:09:47.951067 ignition[651]: Ignition finished successfully
Sep 13 00:09:48.003237 ignition[753]: Ignition 2.19.0
Sep 13 00:09:48.003252 ignition[753]: Stage: fetch
Sep 13 00:09:48.003582 ignition[753]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:48.003597 ignition[753]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:48.003798 ignition[753]: parsed url from cmdline: ""
Sep 13 00:09:48.003805 ignition[753]: no config URL provided
Sep 13 00:09:48.003813 ignition[753]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:09:48.003834 ignition[753]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:09:48.003866 ignition[753]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 13 00:09:48.021000 ignition[753]: GET result: OK
Sep 13 00:09:48.021171 ignition[753]: parsing config with SHA512: c7124d08bad7853f71afccc5c0a2c33d41a781c82b624579ec46aa071a1a5fcdd499288119e400959f218eaf9fc6b886dcda001e2f5536f7e0c3d3e9ed9273ce
Sep 13 00:09:48.031565 unknown[753]: fetched base config from "system"
Sep 13 00:09:48.031585 unknown[753]: fetched base config from "system"
Sep 13 00:09:48.032303 ignition[753]: fetch: fetch complete
Sep 13 00:09:48.031596 unknown[753]: fetched user config from "digitalocean"
Sep 13 00:09:48.032311 ignition[753]: fetch: fetch passed
Sep 13 00:09:48.035756 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 13 00:09:48.032381 ignition[753]: Ignition finished successfully
Sep 13 00:09:48.051156 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:09:48.073802 ignition[761]: Ignition 2.19.0
Sep 13 00:09:48.073819 ignition[761]: Stage: kargs
Sep 13 00:09:48.074079 ignition[761]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:48.074094 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:48.077491 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:09:48.075658 ignition[761]: kargs: kargs passed
Sep 13 00:09:48.075769 ignition[761]: Ignition finished successfully
Sep 13 00:09:48.085064 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:09:48.106284 ignition[767]: Ignition 2.19.0
Sep 13 00:09:48.106303 ignition[767]: Stage: disks
Sep 13 00:09:48.106603 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:48.106616 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:48.108049 ignition[767]: disks: disks passed
Sep 13 00:09:48.110495 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:09:48.108125 ignition[767]: Ignition finished successfully
Sep 13 00:09:48.116879 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:09:48.117948 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:09:48.119003 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:09:48.120435 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:09:48.121846 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:09:48.135818 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:09:48.153944 systemd-fsck[775]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:09:48.157519 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:09:48.166064 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:09:48.315760 kernel: EXT4-fs (vda9): mounted filesystem 3a3ecd49-b269-4fcb-bb61-e2994e1868ee r/w with ordered data mode. Quota mode: none.
Sep 13 00:09:48.316824 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:09:48.318521 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:09:48.325949 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:48.329901 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:09:48.335078 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 13 00:09:48.342752 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (783)
Sep 13 00:09:48.345967 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 13 00:09:48.354037 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:48.354084 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:48.354103 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:09:48.356611 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:09:48.358014 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:09:48.370803 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:09:48.375971 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:09:48.377850 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:09:48.387063 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:09:48.472982 coreos-metadata[785]: Sep 13 00:09:48.472 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:09:48.480598 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:09:48.485672 coreos-metadata[785]: Sep 13 00:09:48.484 INFO Fetch successful
Sep 13 00:09:48.489735 coreos-metadata[786]: Sep 13 00:09:48.488 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:09:48.497051 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:09:48.498406 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 13 00:09:48.499042 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 13 00:09:48.502902 coreos-metadata[786]: Sep 13 00:09:48.500 INFO Fetch successful
Sep 13 00:09:48.510581 initrd-setup-root[828]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:09:48.512301 coreos-metadata[786]: Sep 13 00:09:48.512 INFO wrote hostname ci-4081.3.5-n-738365eea6 to /sysroot/etc/hostname
Sep 13 00:09:48.514180 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:09:48.521505 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:09:48.651212 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:09:48.663063 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:09:48.667023 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:09:48.678798 kernel: BTRFS info (device vda6): last unmount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:48.682112 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:09:48.713933 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:09:48.721531 ignition[903]: INFO : Ignition 2.19.0
Sep 13 00:09:48.721531 ignition[903]: INFO : Stage: mount
Sep 13 00:09:48.724084 ignition[903]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:48.724084 ignition[903]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:48.725846 ignition[903]: INFO : mount: mount passed
Sep 13 00:09:48.725846 ignition[903]: INFO : Ignition finished successfully
Sep 13 00:09:48.726084 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:09:48.734974 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:09:48.765184 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:09:48.778025 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (915)
Sep 13 00:09:48.781411 kernel: BTRFS info (device vda6): first mount of filesystem 94088f30-ba7d-4694-bba6-875359d7b417
Sep 13 00:09:48.781519 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 13 00:09:48.781538 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:09:48.785789 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:09:48.788940 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
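The coreos-metadata processes above fetch the droplet's metadata JSON and write the hostname into the target root. Roughly what the hostname agent does, as a hedged Python sketch (the URL is from the log; the "hostname" key follows the DigitalOcean metadata v1 schema; the function is illustrative, not the agent's real code):

# Approximation of flatcar-metadata-hostname per the log: fetch
# metadata/v1.json and write its "hostname" field into /sysroot/etc/hostname.
import json
import urllib.request

def write_hostname(sysroot: str = "/sysroot") -> str:
    url = "http://169.254.169.254/metadata/v1.json"  # endpoint from the log
    with urllib.request.urlopen(url, timeout=5) as resp:
        hostname = json.load(resp)["hostname"]
    with open(f"{sysroot}/etc/hostname", "w") as f:
        f.write(hostname + "\n")
    return hostname  # the log records: wrote hostname ci-4081.3.5-n-738365eea6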
Sep 13 00:09:48.823878 ignition[931]: INFO : Ignition 2.19.0
Sep 13 00:09:48.823878 ignition[931]: INFO : Stage: files
Sep 13 00:09:48.825519 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:48.825519 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:48.827354 ignition[931]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:09:48.828977 ignition[931]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:09:48.828977 ignition[931]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:09:48.833358 ignition[931]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:09:48.834532 ignition[931]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:09:48.835592 unknown[931]: wrote ssh authorized keys file for user: core
Sep 13 00:09:48.836476 ignition[931]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:09:48.837388 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:09:48.838490 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:09:48.838490 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:09:48.838490 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 13 00:09:48.894247 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:09:49.269779 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 13 00:09:49.269779 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:09:49.274995 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 13 00:09:49.717827 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 13 00:09:49.729146 systemd-networkd[745]: eth1: Gained IPv6LL
Sep 13 00:09:49.857846 systemd-networkd[745]: eth0: Gained IPv6LL
Sep 13 00:09:50.138611 ignition[931]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 13 00:09:50.138611 ignition[931]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:09:50.142106 ignition[931]: INFO : files: files passed
Sep 13 00:09:50.142106 ignition[931]: INFO : Ignition finished successfully
Sep 13 00:09:50.143291 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:09:50.151049 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:09:50.162572 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:09:50.168114 systemd[1]: ignition-quench.service: Deactivated successfully.
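Each op(N) in the files stage above corresponds to an entry in the merged Ignition config. Reconstructed from the log alone (field names follow the public Ignition v3 JSON schema; contents are abbreviated and the spec version is assumed, so treat this as a sketch, not the actual user data):

# Hypothetical reconstruction of the config fragment behind ops (3)-(10).
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed spec version
    "storage": {
        "files": [
            {"path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
             "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"},
        ],
    },
    "systemd": {
        "units": [
            {"name": "containerd.service",
             "dropins": [{"name": "10-use-cgroupfs.conf",
                          "contents": "[Service]\n# ...cgroup driver settings...\n"}]},
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n# ...\n"},
        ],
    },
}
print(json.dumps(config, indent=2))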
Sep 13 00:09:50.169044 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:09:50.179200 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:09:50.179200 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:09:50.183752 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:09:50.184514 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:09:50.186406 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:09:50.195076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:09:50.246427 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:09:50.246645 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:09:50.248347 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:09:50.249335 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:09:50.250690 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:09:50.257105 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:09:50.277023 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:09:50.287132 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:09:50.303052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:50.304901 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:09:50.306728 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:09:50.307446 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:09:50.307667 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:09:50.309218 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:09:50.310049 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:09:50.311260 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:09:50.312635 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:09:50.314306 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:09:50.315580 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:09:50.317029 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:09:50.318683 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:09:50.320059 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:09:50.321448 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:09:50.322726 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:09:50.322963 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:09:50.324779 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:09:50.326278 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:09:50.327530 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:09:50.328849 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:09:50.329774 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:09:50.329999 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:09:50.331923 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:09:50.332215 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:09:50.333807 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:09:50.334032 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:09:50.334961 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 13 00:09:50.335128 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 13 00:09:50.346697 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:09:50.347417 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:09:50.347739 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:09:50.351026 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:09:50.352648 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:09:50.352932 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:09:50.356965 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:09:50.357103 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:09:50.365988 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:09:50.366115 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:09:50.380752 ignition[984]: INFO : Ignition 2.19.0
Sep 13 00:09:50.380752 ignition[984]: INFO : Stage: umount
Sep 13 00:09:50.380752 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:09:50.380752 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 13 00:09:50.386794 ignition[984]: INFO : umount: umount passed
Sep 13 00:09:50.387644 ignition[984]: INFO : Ignition finished successfully
Sep 13 00:09:50.392146 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:09:50.393952 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:09:50.394104 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:09:50.427354 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:09:50.427634 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:09:50.429071 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:09:50.429167 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:09:50.430286 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 13 00:09:50.430343 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 13 00:09:50.431640 systemd[1]: Stopped target network.target - Network.
Sep 13 00:09:50.432755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:09:50.432902 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:09:50.434223 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:09:50.436182 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:09:50.439805 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:09:50.442770 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:09:50.444042 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:09:50.448049 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:09:50.448142 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:09:50.449578 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:09:50.449640 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:09:50.451523 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:09:50.451595 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:09:50.452687 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:09:50.452782 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:09:50.454093 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:09:50.455141 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:50.457135 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:09:50.457272 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:09:50.458876 systemd-networkd[745]: eth0: DHCPv6 lease lost
Sep 13 00:09:50.459219 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:09:50.459337 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:09:50.463360 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:09:50.463571 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:50.464164 systemd-networkd[745]: eth1: DHCPv6 lease lost
Sep 13 00:09:50.468240 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:09:50.468448 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:09:50.471183 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:09:50.471259 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:09:50.479066 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:09:50.480941 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:09:50.481083 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:09:50.484589 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:09:50.484680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:09:50.486218 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:09:50.486291 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:09:50.488012 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:09:50.488091 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:50.489807 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:09:50.504163 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:09:50.504319 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:09:50.507538 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:09:50.507921 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:09:50.510833 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:09:50.510932 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:09:50.512861 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:09:50.512930 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:09:50.514377 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:09:50.514474 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:09:50.516432 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:09:50.516539 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:09:50.517940 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:09:50.518041 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:09:50.532245 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:09:50.534098 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:09:50.534220 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:09:50.534907 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:50.534962 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:50.544547 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:09:50.545164 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:09:50.546940 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:09:50.555132 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:09:50.570066 systemd[1]: Switching root.
Sep 13 00:09:50.595764 systemd-journald[184]: Journal stopped
Sep 13 00:09:51.980355 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:09:51.980442 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:09:51.980458 kernel: SELinux: policy capability open_perms=1
Sep 13 00:09:51.980469 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:09:51.980486 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:09:51.980498 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:09:51.980515 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:09:51.980527 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:09:51.980538 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:09:51.980549 kernel: audit: type=1403 audit(1757722190.866:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:09:51.980568 systemd[1]: Successfully loaded SELinux policy in 47.934ms.
Sep 13 00:09:51.980590 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.253ms.
Sep 13 00:09:51.980606 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:09:51.980620 systemd[1]: Detected virtualization kvm.
Sep 13 00:09:51.980631 systemd[1]: Detected architecture x86-64.
Sep 13 00:09:51.980642 systemd[1]: Detected first boot.
Sep 13 00:09:51.980655 systemd[1]: Hostname set to <ci-4081.3.5-n-738365eea6>.
Sep 13 00:09:51.980672 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:09:51.980688 zram_generator::config[1049]: No configuration found.
Sep 13 00:09:51.981843 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:09:51.981894 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:09:51.981915 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:09:51.981940 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:09:51.981960 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:09:51.981989 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:09:51.982006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:09:51.982025 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:09:51.982043 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:09:51.982064 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:09:51.982086 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:09:51.982107 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:09:51.982126 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:09:51.982145 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:09:51.982166 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:09:51.982188 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:09:51.982211 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:09:51.982231 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 13 00:09:51.982256 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:09:51.982275 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:09:51.982294 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:09:51.982316 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:09:51.982335 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:09:51.982355 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:09:51.982376 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:09:51.982399 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:09:51.982426 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:09:51.982446 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:09:51.982465 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:09:51.982485 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:09:51.982504 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:09:51.982524 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:09:51.982543 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:09:51.982562 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:09:51.982586 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:09:51.982606 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:51.982626 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:09:51.982645 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:09:51.982664 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:09:51.982684 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:09:51.982722 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:51.982743 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:09:51.982765 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:09:51.982790 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:51.982811 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:09:51.982830 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:51.982851 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:09:51.982871 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:51.982890 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:09:51.982908 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:09:51.982926 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:09:51.982948 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:09:51.982969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:09:51.982989 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:09:51.983009 kernel: loop: module loaded
Sep 13 00:09:51.983030 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:09:51.983048 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:09:51.983068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:51.983087 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:09:51.983112 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:09:51.983130 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:09:51.983149 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:09:51.983170 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:09:51.983189 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:09:51.983211 kernel: ACPI: bus type drm_connector registered
Sep 13 00:09:51.983233 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:09:51.983252 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:09:51.983273 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:09:51.983298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:51.983319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:51.983339 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:09:51.983359 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:09:51.983380 kernel: fuse: init (API version 7.39)
Sep 13 00:09:51.983417 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:51.983438 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:51.983457 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:09:51.983476 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:09:51.983495 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:09:51.983525 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:51.983544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:51.983563 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:09:51.983636 systemd-journald[1140]: Collecting audit messages is disabled.
Sep 13 00:09:51.983669 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:09:51.983682 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:09:51.983695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:09:51.985806 systemd-journald[1140]: Journal started
Sep 13 00:09:51.985871 systemd-journald[1140]: Runtime Journal (/run/log/journal/aff756ac4d0f4d1aa3250f4976cc0c25) is 4.9M, max 39.3M, 34.4M free.
Sep 13 00:09:51.995825 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:09:52.006843 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:09:52.006946 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:09:52.019742 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:09:52.028979 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:09:52.046929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:09:52.049895 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:09:52.069485 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:09:52.078745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:09:52.088734 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:09:52.099857 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:09:52.101486 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:09:52.104965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:09:52.108387 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:09:52.122033 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:09:52.139307 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:09:52.148213 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:09:52.152385 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:09:52.169234 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Sep 13 00:09:52.169248 systemd-tmpfiles[1171]: ACLs are not supported, ignoring.
Sep 13 00:09:52.181874 systemd-journald[1140]: Time spent on flushing to /var/log/journal/aff756ac4d0f4d1aa3250f4976cc0c25 is 28.649ms for 981 entries.
Sep 13 00:09:52.181874 systemd-journald[1140]: System Journal (/var/log/journal/aff756ac4d0f4d1aa3250f4976cc0c25) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:09:52.225535 systemd-journald[1140]: Received client request to flush runtime journal.
Sep 13 00:09:52.188863 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:09:52.202098 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:09:52.205618 udevadm[1201]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:09:52.233728 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:09:52.270201 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:09:52.282107 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:09:52.321140 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 13 00:09:52.321645 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 13 00:09:52.331360 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:09:52.878001 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:09:52.891236 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:09:52.925341 systemd-udevd[1217]: Using default interface naming scheme 'v255'.
Sep 13 00:09:52.951208 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:09:52.961651 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:09:52.988985 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:09:53.043236 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Sep 13 00:09:53.059619 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:53.059865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:53.067981 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:53.076912 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
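The journald flush statistics above make for a quick sanity check: 981 entries in 28.649 ms is about 29 µs per entry, or roughly 34,000 entries per second written to the persistent journal. Simple arithmetic on the logged figures:

# Flush throughput from the systemd-journald line above.
entries, ms = 981, 28.649
print(f"{ms / entries * 1000:.1f} µs/entry")        # ≈ 29.2 µs/entry
print(f"{entries / (ms / 1000):,.0f} entries/sec")  # ≈ 34,243 entries/sec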
Sep 13 00:09:53.081905 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:53.083308 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:09:53.083369 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:09:53.083429 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:53.083607 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:09:53.092156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:53.092358 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:53.097507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:53.101002 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:53.110985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:09:53.122357 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:53.125010 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:53.130225 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:09:53.155770 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1224)
Sep 13 00:09:53.257758 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Sep 13 00:09:53.262741 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Sep 13 00:09:53.263816 systemd-networkd[1221]: lo: Link UP
Sep 13 00:09:53.263828 systemd-networkd[1221]: lo: Gained carrier
Sep 13 00:09:53.268699 systemd-networkd[1221]: Enumeration completed
Sep 13 00:09:53.268917 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:09:53.269321 systemd-networkd[1221]: eth0: Configuring with /run/systemd/network/10-02:b4:b8:3a:10:9c.network.
Sep 13 00:09:53.269784 kernel: ACPI: button: Power Button [PWRF]
Sep 13 00:09:53.270347 systemd-networkd[1221]: eth1: Configuring with /run/systemd/network/10-26:bc:43:fb:33:f7.network.
Sep 13 00:09:53.272978 systemd-networkd[1221]: eth0: Link UP
Sep 13 00:09:53.272988 systemd-networkd[1221]: eth0: Gained carrier
Sep 13 00:09:53.277464 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:09:53.278277 systemd-networkd[1221]: eth1: Link UP
Sep 13 00:09:53.278291 systemd-networkd[1221]: eth1: Gained carrier
Sep 13 00:09:53.297814 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Sep 13 00:09:53.336174 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:09:53.357732 kernel: mousedev: PS/2 mouse device common for all mice
Sep 13 00:09:53.381072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
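Above, systemd-networkd matches eth0 and eth1 against MAC-keyed .network files under /run/systemd/network, apparently generated earlier (the initrd logged a parse-ip-for-networkd "Write systemd-networkd units from cmdline" step). A sketch of producing such a file; the file name pattern comes from the log, the [Match]/[Network] keys are standard systemd.network options, and the exact contents here are assumed:

# Illustrative generator for a MAC-matched .network file like the
# 10-02:b4:b8:3a:10:9c.network that networkd picks up in the log.
from pathlib import Path

def write_network_unit(mac: str, run_dir: str = "/run/systemd/network") -> Path:
    unit = Path(run_dir) / f"10-{mac}.network"
    unit.parent.mkdir(parents=True, exist_ok=True)
    # DHCP=yes is an assumption consistent with the DHCPv4 + IPv6LL
    # behaviour seen earlier in the log.
    unit.write_text(f"[Match]\nMACAddress={mac}\n\n[Network]\nDHCP=yes\n")
    return unit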
Sep 13 00:09:53.407743 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Sep 13 00:09:53.414217 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Sep 13 00:09:53.418770 kernel: Console: switching to colour dummy device 80x25
Sep 13 00:09:53.420049 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Sep 13 00:09:53.420102 kernel: [drm] features: -context_init
Sep 13 00:09:53.422739 kernel: [drm] number of scanouts: 1
Sep 13 00:09:53.424746 kernel: [drm] number of cap sets: 0
Sep 13 00:09:53.429768 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
Sep 13 00:09:53.437507 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Sep 13 00:09:53.437597 kernel: Console: switching to colour frame buffer device 128x48
Sep 13 00:09:53.454733 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Sep 13 00:09:53.480271 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:09:53.480585 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:53.496282 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:09:53.600187 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:09:53.618450 kernel: EDAC MC: Ver: 3.0.0
Sep 13 00:09:53.648467 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:09:53.658180 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:09:53.675536 lvm[1280]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:09:53.711155 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:09:53.713048 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:09:53.718004 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:09:53.729735 lvm[1283]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:09:53.764386 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:09:53.766208 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:09:53.776920 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
Sep 13 00:09:53.777073 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:09:53.777109 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:09:53.779667 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:09:53.795753 kernel: ISO 9660 Extensions: RRIP_1991A
Sep 13 00:09:53.801305 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
Sep 13 00:09:53.804399 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:09:53.806666 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:09:53.813082 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:09:53.820053 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:09:53.820920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:53.822926 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:09:53.828928 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:09:53.833144 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:09:53.835481 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:09:53.866018 kernel: loop0: detected capacity change from 0 to 140768
Sep 13 00:09:53.899968 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:09:53.906142 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:09:53.907141 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:09:53.920738 kernel: loop1: detected capacity change from 0 to 221472
Sep 13 00:09:53.962791 kernel: loop2: detected capacity change from 0 to 8
Sep 13 00:09:53.989838 kernel: loop3: detected capacity change from 0 to 142488
Sep 13 00:09:54.045694 kernel: loop4: detected capacity change from 0 to 140768
Sep 13 00:09:54.082760 kernel: loop5: detected capacity change from 0 to 221472
Sep 13 00:09:54.097347 kernel: loop6: detected capacity change from 0 to 8
Sep 13 00:09:54.102129 kernel: loop7: detected capacity change from 0 to 142488
Sep 13 00:09:54.121380 (sd-merge)[1310]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 13 00:09:54.121965 (sd-merge)[1310]: Merged extensions into '/usr'.
Sep 13 00:09:54.137263 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:09:54.137665 systemd[1]: Reloading...
Sep 13 00:09:54.240992 zram_generator::config[1335]: No configuration found.
Sep 13 00:09:54.450428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:09:54.480863 ldconfig[1294]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:09:54.538736 systemd[1]: Reloading finished in 400 ms.
Sep 13 00:09:54.562699 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:09:54.565247 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:09:54.580110 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:09:54.586106 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:09:54.595054 systemd[1]: Reloading requested from client PID 1388 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:09:54.595086 systemd[1]: Reloading...
Sep 13 00:09:54.635406 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:09:54.636241 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:09:54.637260 systemd-tmpfiles[1389]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:09:54.637626 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Sep 13 00:09:54.637829 systemd-tmpfiles[1389]: ACLs are not supported, ignoring.
Sep 13 00:09:54.643477 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
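The (sd-merge) lines above are systemd-sysext discovering the four extension images (each backed by one of the loop devices the kernel just announced) and overlaying them onto /usr. A rough Python equivalent of just the discovery step, assuming the standard sysext search directories:

# Sketch: list sysext images the way the sd-merge step discovers them.
import os

SYSEXT_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def discover_extensions() -> list[str]:
    found = []
    for d in SYSEXT_DIRS:
        if os.path.isdir(d):
            found += [os.path.join(d, n) for n in sorted(os.listdir(d))
                      if n.endswith(".raw")]
    return found  # e.g. /etc/extensions/kubernetes.raw from the files stage

print(discover_extensions())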
Sep 13 00:09:54.643753 systemd-tmpfiles[1389]: Skipping /boot
Sep 13 00:09:54.660399 systemd-tmpfiles[1389]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:09:54.660581 systemd-tmpfiles[1389]: Skipping /boot
Sep 13 00:09:54.712036 zram_generator::config[1417]: No configuration found.
Sep 13 00:09:54.860494 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:09:54.929315 systemd[1]: Reloading finished in 333 ms.
Sep 13 00:09:54.960806 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:09:54.975074 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:09:54.986215 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:09:54.993898 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:09:55.005677 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:09:55.016093 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:09:55.034812 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:55.035567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:55.040234 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:55.055774 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:55.064324 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:55.066951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:55.067208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:55.087229 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:55.087473 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:55.093531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:55.093856 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:55.101362 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:55.101580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:55.114475 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:09:55.121642 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:09:55.144189 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:09:55.156214 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:55.156827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:09:55.161260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:09:55.162407 augenrules[1505]: No rules
Sep 13 00:09:55.169863 systemd-networkd[1221]: eth0: Gained IPv6LL
Sep 13 00:09:55.170268 systemd-networkd[1221]: eth1: Gained IPv6LL
Sep 13 00:09:55.176387 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:09:55.196060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:09:55.213789 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:09:55.217978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:09:55.227923 systemd-resolved[1473]: Positive Trust Anchors:
Sep 13 00:09:55.227933 systemd-resolved[1473]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:09:55.227979 systemd-resolved[1473]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:09:55.240193 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:09:55.240818 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:09:55.240955 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 13 00:09:55.243018 systemd-resolved[1473]: Using system hostname 'ci-4081.3.5-n-738365eea6'.
Sep 13 00:09:55.244340 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:09:55.249574 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:09:55.252070 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:09:55.256141 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:09:55.266081 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:09:55.269137 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:09:55.269358 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:09:55.271868 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:09:55.272173 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:09:55.274242 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:09:55.274473 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:09:55.282477 systemd[1]: Reached target network.target - Network.
Sep 13 00:09:55.284638 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:09:55.286531 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:09:55.287052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:09:55.287143 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:09:55.288391 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:09:55.299074 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:09:55.302533 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:09:55.381453 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:09:55.383251 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:09:55.386491 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:09:55.387992 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:09:55.388584 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:09:55.389132 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:09:55.389170 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:09:55.389610 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:09:55.392542 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:09:55.393202 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:09:55.393672 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:09:55.398696 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:09:55.403170 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:09:55.407903 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:09:55.409553 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:09:55.412459 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:09:55.413279 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:09:55.415630 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:09:55.416078 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:09:55.416142 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:09:55.421918 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:09:55.431039 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 13 00:09:55.445244 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:09:55.459978 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:09:55.476103 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:09:55.477841 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:09:55.484950 coreos-metadata[1537]: Sep 13 00:09:55.484 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 13 00:09:55.485613 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:09:55.499065 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:09:55.503018 coreos-metadata[1537]: Sep 13 00:09:55.500 INFO Fetch successful Sep 13 00:09:55.509999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 13 00:09:55.520479 jq[1542]: false Sep 13 00:09:55.528939 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 13 00:09:55.540459 extend-filesystems[1543]: Found loop4 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found loop5 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found loop6 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found loop7 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda1 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda2 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda3 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found usr Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda4 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda6 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda7 Sep 13 00:09:55.548750 extend-filesystems[1543]: Found vda9 Sep 13 00:09:55.548750 extend-filesystems[1543]: Checking size of /dev/vda9 Sep 13 00:09:55.553063 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 13 00:09:55.587784 dbus-daemon[1538]: [system] SELinux support is enabled Sep 13 00:09:55.570817 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 13 00:09:55.593054 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 13 00:09:55.596463 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 13 00:09:55.612622 systemd[1]: Starting update-engine.service - Update Engine... Sep 13 00:09:55.629748 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 13 00:09:55.637130 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 13 00:09:55.654420 extend-filesystems[1543]: Resized partition /dev/vda9 Sep 13 00:09:55.669113 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 13 00:09:55.669603 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 13 00:09:55.705760 extend-filesystems[1575]: resize2fs 1.47.1 (20-May-2024) Sep 13 00:09:55.736849 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 13 00:09:55.736910 jq[1570]: true Sep 13 00:09:55.737174 update_engine[1560]: I20250913 00:09:55.727476 1560 main.cc:92] Flatcar Update Engine starting Sep 13 00:09:55.720598 systemd[1]: motdgen.service: Deactivated successfully. Sep 13 00:09:55.726137 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 13 00:09:55.735089 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 13 00:09:55.735593 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 13 00:09:55.752776 update_engine[1560]: I20250913 00:09:55.751497 1560 update_check_scheduler.cc:74] Next update check in 3m0s Sep 13 00:09:55.809750 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1232) Sep 13 00:09:55.819243 systemd-timesyncd[1531]: Contacted time server 15.204.198.96:123 (0.flatcar.pool.ntp.org). Sep 13 00:09:55.819429 systemd-timesyncd[1531]: Initial clock synchronization to Sat 2025-09-13 00:09:56.106270 UTC. 
Sep 13 00:09:55.827034 (ntainerd)[1590]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 13 00:09:55.841749 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 13 00:09:55.856452 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 13 00:09:55.862783 jq[1588]: true Sep 13 00:09:55.865171 systemd[1]: Started update-engine.service - Update Engine. Sep 13 00:09:55.875217 tar[1578]: linux-amd64/helm Sep 13 00:09:55.868737 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 13 00:09:55.869336 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 13 00:09:55.869390 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 13 00:09:55.873563 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 13 00:09:55.874459 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 13 00:09:55.874540 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 13 00:09:55.877657 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 13 00:09:55.884136 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 13 00:09:56.008777 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 13 00:09:56.045776 extend-filesystems[1575]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 13 00:09:56.045776 extend-filesystems[1575]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 13 00:09:56.045776 extend-filesystems[1575]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 13 00:09:56.096929 extend-filesystems[1543]: Resized filesystem in /dev/vda9 Sep 13 00:09:56.096929 extend-filesystems[1543]: Found vdb Sep 13 00:09:56.052238 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 13 00:09:56.052675 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 13 00:09:56.126452 systemd-logind[1559]: New seat seat0. Sep 13 00:09:56.133892 systemd-logind[1559]: Watching system buttons on /dev/input/event1 (Power Button) Sep 13 00:09:56.133934 systemd-logind[1559]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 13 00:09:56.134287 systemd[1]: Started systemd-logind.service - User Login Management. Sep 13 00:09:56.205995 bash[1626]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:09:56.208568 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 13 00:09:56.253313 systemd[1]: Starting sshkeys.service... Sep 13 00:09:56.285688 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 13 00:09:56.299118 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
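The coreos-metadata agents above fetch the droplet's metadata document from the link-local endpoint and, in the sshkeys variant, write the keys into /home/core/.ssh/authorized_keys. A rough sketch of that fetch, using the URL from the log; the "public_keys" field name is an assumption about the document layout, not something the log confirms.

```python
# Sketch of the metadata fetch logged above. The endpoint URL comes from
# the log; the "public_keys" field name is an assumed detail of the
# DigitalOcean metadata document.
import json
import urllib.request

META_URL = "http://169.254.169.254/metadata/v1.json"  # from the log

with urllib.request.urlopen(META_URL, timeout=5) as resp:
    metadata = json.load(resp)

for key in metadata.get("public_keys", []):  # assumed field name
    print(key)
```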
Sep 13 00:09:56.331562 locksmithd[1605]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 13 00:09:56.355239 coreos-metadata[1638]: Sep 13 00:09:56.353 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 13 00:09:56.371912 coreos-metadata[1638]: Sep 13 00:09:56.369 INFO Fetch successful Sep 13 00:09:56.392274 unknown[1638]: wrote ssh authorized keys file for user: core Sep 13 00:09:56.492632 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" Sep 13 00:09:56.499155 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 13 00:09:56.510212 systemd[1]: Finished sshkeys.service. Sep 13 00:09:56.529299 sshd_keygen[1580]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 13 00:09:56.613481 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 13 00:09:56.635214 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 13 00:09:56.689422 systemd[1]: issuegen.service: Deactivated successfully. Sep 13 00:09:56.689808 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 13 00:09:56.711908 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 13 00:09:56.746045 containerd[1590]: time="2025-09-13T00:09:56.744297229Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 13 00:09:56.750565 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 13 00:09:56.772270 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 13 00:09:56.787945 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.795929756Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.798836918Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.798895501Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.798925504Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799243145Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799265844Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799332784Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799354619Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799658719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799678519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.799763 containerd[1590]: time="2025-09-13T00:09:56.799692508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:56.797053 systemd[1]: Reached target getty.target - Login Prompts. Sep 13 00:09:56.800203 containerd[1590]: time="2025-09-13T00:09:56.799702721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.800203 containerd[1590]: time="2025-09-13T00:09:56.799811020Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.800203 containerd[1590]: time="2025-09-13T00:09:56.800111345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 13 00:09:56.800326 containerd[1590]: time="2025-09-13T00:09:56.800298131Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 13 00:09:56.800326 containerd[1590]: time="2025-09-13T00:09:56.800320501Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 13 00:09:56.800452 containerd[1590]: time="2025-09-13T00:09:56.800418066Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 13 00:09:56.800880 containerd[1590]: time="2025-09-13T00:09:56.800490099Z" level=info msg="metadata content store policy set" policy=shared Sep 13 00:09:56.814777 containerd[1590]: time="2025-09-13T00:09:56.814687265Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 13 00:09:56.814995 containerd[1590]: time="2025-09-13T00:09:56.814807414Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 13 00:09:56.814995 containerd[1590]: time="2025-09-13T00:09:56.814830291Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 13 00:09:56.814995 containerd[1590]: time="2025-09-13T00:09:56.814847362Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 13 00:09:56.814995 containerd[1590]: time="2025-09-13T00:09:56.814912756Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 13 00:09:56.815367 containerd[1590]: time="2025-09-13T00:09:56.815132645Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 13 00:09:56.815859 containerd[1590]: time="2025-09-13T00:09:56.815692017Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 13 00:09:56.815935 containerd[1590]: time="2025-09-13T00:09:56.815897547Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." 
type=io.containerd.runtime.v2 Sep 13 00:09:56.815935 containerd[1590]: time="2025-09-13T00:09:56.815920346Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.815936151Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.815951660Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.815965691Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.815979200Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816003594Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816021997Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816035958Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816050456Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816063605Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816100877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816117219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816130586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816144593Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.817752 containerd[1590]: time="2025-09-13T00:09:56.816158085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816171374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816184015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816199064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816221332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816242314Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816254583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816269813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816283338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816300078Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816323135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816345528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816359182Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816406601Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 13 00:09:56.818175 containerd[1590]: time="2025-09-13T00:09:56.816425340Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816437196Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816451305Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816461828Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816481789Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816496655Z" level=info msg="NRI interface is disabled by configuration." Sep 13 00:09:56.818440 containerd[1590]: time="2025-09-13T00:09:56.816508054Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 13 00:09:56.818556 containerd[1590]: time="2025-09-13T00:09:56.817613344Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 13 00:09:56.818556 containerd[1590]: time="2025-09-13T00:09:56.817686885Z" level=info msg="Connect containerd service" Sep 13 00:09:56.818556 containerd[1590]: time="2025-09-13T00:09:56.817799442Z" level=info msg="using legacy CRI server" Sep 13 00:09:56.818556 containerd[1590]: time="2025-09-13T00:09:56.817813751Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 13 00:09:56.818556 containerd[1590]: time="2025-09-13T00:09:56.817946603Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 13 00:09:56.819098 containerd[1590]: time="2025-09-13T00:09:56.818649293Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 13 
00:09:56.819098 containerd[1590]: time="2025-09-13T00:09:56.819083343Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 13 00:09:56.819168 containerd[1590]: time="2025-09-13T00:09:56.819131332Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819258935Z" level=info msg="Start subscribing containerd event" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819310985Z" level=info msg="Start recovering state" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819376416Z" level=info msg="Start event monitor" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819395611Z" level=info msg="Start snapshots syncer" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819405058Z" level=info msg="Start cni network conf syncer for default" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819412440Z" level=info msg="Start streaming server" Sep 13 00:09:56.820877 containerd[1590]: time="2025-09-13T00:09:56.819475714Z" level=info msg="containerd successfully booted in 0.078076s" Sep 13 00:09:56.819986 systemd[1]: Started containerd.service - containerd container runtime. Sep 13 00:09:57.204347 tar[1578]: linux-amd64/LICENSE Sep 13 00:09:57.207030 tar[1578]: linux-amd64/README.md Sep 13 00:09:57.232547 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 13 00:09:57.725969 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:09:57.729286 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 13 00:09:57.733026 systemd[1]: Startup finished in 7.369s (kernel) + 6.913s (userspace) = 14.283s. Sep 13 00:09:57.742483 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:09:58.469080 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 13 00:09:58.478259 systemd[1]: Started sshd@0-164.90.159.5:22-139.178.68.195:54422.service - OpenSSH per-connection server daemon (139.178.68.195:54422). Sep 13 00:09:58.478858 kubelet[1694]: E0913 00:09:58.478814 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:09:58.482725 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:09:58.483021 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:09:58.570191 sshd[1704]: Accepted publickey for core from 139.178.68.195 port 54422 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:58.572768 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:58.588821 systemd-logind[1559]: New session 1 of user core. Sep 13 00:09:58.590478 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 13 00:09:58.596251 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 13 00:09:58.628924 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 13 00:09:58.644059 systemd[1]: Starting user@500.service - User Manager for UID 500... 
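A small arithmetic note on the "Startup finished in 7.369s (kernel) + 6.913s (userspace) = 14.283s" entry above: 7.369 + 6.913 is 14.282, not 14.283. That is not an error in the log; systemd keeps microsecond-precision timestamps and rounds each figure independently, so the printed parts need not sum exactly. Hypothetical exact values showing how that happens:

```python
# Assumed microsecond-precision values (not from the log) that reproduce
# the rounding: each figure is rounded independently, so the rounded
# parts (7.369 + 6.913 = 14.282) differ from the rounded total (14.283).
kernel_us = 7_369_400     # -> 7.369s
userspace_us = 6_913_200  # -> 6.913s
total_us = kernel_us + userspace_us  # 14_282_600 -> 14.283s

for label, us in [("kernel", kernel_us), ("userspace", userspace_us),
                  ("total", total_us)]:
    print(f"{label}: {us / 1e6:.3f}s")
```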
Sep 13 00:09:58.648443 (systemd)[1712]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 13 00:09:58.788161 systemd[1712]: Queued start job for default target default.target. Sep 13 00:09:58.788707 systemd[1712]: Created slice app.slice - User Application Slice. Sep 13 00:09:58.788759 systemd[1712]: Reached target paths.target - Paths. Sep 13 00:09:58.788781 systemd[1712]: Reached target timers.target - Timers. Sep 13 00:09:58.794905 systemd[1712]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 13 00:09:58.818096 systemd[1712]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 13 00:09:58.819040 systemd[1712]: Reached target sockets.target - Sockets. Sep 13 00:09:58.819066 systemd[1712]: Reached target basic.target - Basic System. Sep 13 00:09:58.819146 systemd[1712]: Reached target default.target - Main User Target. Sep 13 00:09:58.819189 systemd[1712]: Startup finished in 160ms. Sep 13 00:09:58.819845 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 13 00:09:58.826884 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 13 00:09:58.896287 systemd[1]: Started sshd@1-164.90.159.5:22-139.178.68.195:54436.service - OpenSSH per-connection server daemon (139.178.68.195:54436). Sep 13 00:09:58.949213 sshd[1724]: Accepted publickey for core from 139.178.68.195 port 54436 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:58.952406 sshd[1724]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:58.960125 systemd-logind[1559]: New session 2 of user core. Sep 13 00:09:58.968362 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 13 00:09:59.036392 sshd[1724]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:59.050566 systemd[1]: Started sshd@2-164.90.159.5:22-139.178.68.195:54448.service - OpenSSH per-connection server daemon (139.178.68.195:54448). Sep 13 00:09:59.052008 systemd[1]: sshd@1-164.90.159.5:22-139.178.68.195:54436.service: Deactivated successfully. Sep 13 00:09:59.055580 systemd[1]: session-2.scope: Deactivated successfully. Sep 13 00:09:59.056554 systemd-logind[1559]: Session 2 logged out. Waiting for processes to exit. Sep 13 00:09:59.059107 systemd-logind[1559]: Removed session 2. Sep 13 00:09:59.102465 sshd[1729]: Accepted publickey for core from 139.178.68.195 port 54448 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:59.104821 sshd[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:59.112935 systemd-logind[1559]: New session 3 of user core. Sep 13 00:09:59.120411 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 13 00:09:59.182106 sshd[1729]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:59.188940 systemd[1]: sshd@2-164.90.159.5:22-139.178.68.195:54448.service: Deactivated successfully. Sep 13 00:09:59.193832 systemd[1]: session-3.scope: Deactivated successfully. Sep 13 00:09:59.195381 systemd-logind[1559]: Session 3 logged out. Waiting for processes to exit. Sep 13 00:09:59.205283 systemd[1]: Started sshd@3-164.90.159.5:22-139.178.68.195:54452.service - OpenSSH per-connection server daemon (139.178.68.195:54452). Sep 13 00:09:59.207114 systemd-logind[1559]: Removed session 3. 
Sep 13 00:09:59.278174 sshd[1740]: Accepted publickey for core from 139.178.68.195 port 54452 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:59.280236 sshd[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:59.287048 systemd-logind[1559]: New session 4 of user core. Sep 13 00:09:59.293554 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 13 00:09:59.365132 sshd[1740]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:59.376481 systemd[1]: Started sshd@4-164.90.159.5:22-139.178.68.195:54454.service - OpenSSH per-connection server daemon (139.178.68.195:54454). Sep 13 00:09:59.377394 systemd[1]: sshd@3-164.90.159.5:22-139.178.68.195:54452.service: Deactivated successfully. Sep 13 00:09:59.381483 systemd[1]: session-4.scope: Deactivated successfully. Sep 13 00:09:59.382686 systemd-logind[1559]: Session 4 logged out. Waiting for processes to exit. Sep 13 00:09:59.385983 systemd-logind[1559]: Removed session 4. Sep 13 00:09:59.431276 sshd[1745]: Accepted publickey for core from 139.178.68.195 port 54454 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:59.433893 sshd[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:59.443692 systemd-logind[1559]: New session 5 of user core. Sep 13 00:09:59.453457 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 13 00:09:59.530503 sudo[1752]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 13 00:09:59.531382 sudo[1752]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:59.551274 sudo[1752]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:59.555680 sshd[1745]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:59.560532 systemd-logind[1559]: Session 5 logged out. Waiting for processes to exit. Sep 13 00:09:59.561041 systemd[1]: sshd@4-164.90.159.5:22-139.178.68.195:54454.service: Deactivated successfully. Sep 13 00:09:59.574534 systemd[1]: Started sshd@5-164.90.159.5:22-139.178.68.195:54460.service - OpenSSH per-connection server daemon (139.178.68.195:54460). Sep 13 00:09:59.576272 systemd[1]: session-5.scope: Deactivated successfully. Sep 13 00:09:59.578410 systemd-logind[1559]: Removed session 5. Sep 13 00:09:59.639590 sshd[1757]: Accepted publickey for core from 139.178.68.195 port 54460 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:59.642511 sshd[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:59.650852 systemd-logind[1559]: New session 6 of user core. Sep 13 00:09:59.658407 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 13 00:09:59.726707 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 13 00:09:59.727340 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:59.733072 sudo[1762]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:59.743067 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 13 00:09:59.743601 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:09:59.769218 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... 
Sep 13 00:09:59.775408 auditctl[1765]: No rules Sep 13 00:09:59.776391 systemd[1]: audit-rules.service: Deactivated successfully. Sep 13 00:09:59.776822 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 13 00:09:59.788447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 13 00:09:59.838261 augenrules[1784]: No rules Sep 13 00:09:59.840306 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 13 00:09:59.844530 sudo[1761]: pam_unix(sudo:session): session closed for user root Sep 13 00:09:59.854113 sshd[1757]: pam_unix(sshd:session): session closed for user core Sep 13 00:09:59.873888 systemd[1]: Started sshd@6-164.90.159.5:22-139.178.68.195:56354.service - OpenSSH per-connection server daemon (139.178.68.195:56354). Sep 13 00:09:59.875544 systemd[1]: sshd@5-164.90.159.5:22-139.178.68.195:54460.service: Deactivated successfully. Sep 13 00:09:59.880711 systemd[1]: session-6.scope: Deactivated successfully. Sep 13 00:09:59.883961 systemd-logind[1559]: Session 6 logged out. Waiting for processes to exit. Sep 13 00:09:59.888033 systemd-logind[1559]: Removed session 6. Sep 13 00:09:59.943515 sshd[1791]: Accepted publickey for core from 139.178.68.195 port 56354 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:09:59.945776 sshd[1791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:09:59.953880 systemd-logind[1559]: New session 7 of user core. Sep 13 00:09:59.963888 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 13 00:10:00.037640 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 13 00:10:00.038264 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 13 00:10:00.688524 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 13 00:10:00.700989 (dockerd)[1812]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 13 00:10:01.283836 dockerd[1812]: time="2025-09-13T00:10:01.283670770Z" level=info msg="Starting up" Sep 13 00:10:01.580584 systemd[1]: var-lib-docker-metacopy\x2dcheck3530920295-merged.mount: Deactivated successfully. Sep 13 00:10:01.613305 dockerd[1812]: time="2025-09-13T00:10:01.612805100Z" level=info msg="Loading containers: start." Sep 13 00:10:01.873573 kernel: Initializing XFRM netlink socket Sep 13 00:10:02.039499 systemd-networkd[1221]: docker0: Link UP Sep 13 00:10:02.080859 dockerd[1812]: time="2025-09-13T00:10:02.080082458Z" level=info msg="Loading containers: done." Sep 13 00:10:02.119764 dockerd[1812]: time="2025-09-13T00:10:02.119201050Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 13 00:10:02.119764 dockerd[1812]: time="2025-09-13T00:10:02.119615635Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 13 00:10:02.120406 dockerd[1812]: time="2025-09-13T00:10:02.120285256Z" level=info msg="Daemon has completed initialization" Sep 13 00:10:02.224778 dockerd[1812]: time="2025-09-13T00:10:02.224314985Z" level=info msg="API listen on /run/docker.sock" Sep 13 00:10:02.225834 systemd[1]: Started docker.service - Docker Application Container Engine. 
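The dockerd started above reports "API listen on /run/docker.sock". As a minimal sketch, the Engine API on that Unix socket is plain HTTP, which is what the docker CLI and client libraries speak under the hood; a raw GET against the real /_ping endpoint is enough to check liveness.

```python
# Minimal liveness check against the Docker Engine API socket from the log:
# send a raw HTTP GET /_ping over /run/docker.sock and print the status line.
import socket

with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = b""
    while chunk := s.recv(4096):
        reply += chunk

print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"
```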
Sep 13 00:10:03.506209 containerd[1590]: time="2025-09-13T00:10:03.506125472Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 13 00:10:04.211005 kernel: hrtimer: interrupt took 13571730 ns Sep 13 00:10:04.864014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount923877645.mount: Deactivated successfully. Sep 13 00:10:06.857545 containerd[1590]: time="2025-09-13T00:10:06.857419938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:06.859589 containerd[1590]: time="2025-09-13T00:10:06.859496606Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 13 00:10:06.862763 containerd[1590]: time="2025-09-13T00:10:06.860575402Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:06.866126 containerd[1590]: time="2025-09-13T00:10:06.865917999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:06.868283 containerd[1590]: time="2025-09-13T00:10:06.868215077Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 3.362018223s" Sep 13 00:10:06.868594 containerd[1590]: time="2025-09-13T00:10:06.868557146Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 13 00:10:06.872859 containerd[1590]: time="2025-09-13T00:10:06.872797617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 13 00:10:08.565620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 13 00:10:08.568507 containerd[1590]: time="2025-09-13T00:10:08.568451196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:08.570226 containerd[1590]: time="2025-09-13T00:10:08.570159081Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 13 00:10:08.571550 containerd[1590]: time="2025-09-13T00:10:08.571503566Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:08.574051 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
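A back-of-the-envelope check on the kube-apiserver pull logged above: roughly 28.1 MB in about 3.36 s works out to around 8 MB/s from registry.k8s.io, which is consistent with the later, larger etcd pull taking a few seconds as well.

```python
# Throughput estimate from the figures in the log above.
size_bytes = 28_113_723   # image size reported for kube-apiserver:v1.31.13
duration_s = 3.362018223  # pull duration reported by containerd
print(f"{size_bytes / duration_s / 1e6:.2f} MB/s")  # ~8.36 MB/s
```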
Sep 13 00:10:08.577516 containerd[1590]: time="2025-09-13T00:10:08.577447294Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:08.578926 containerd[1590]: time="2025-09-13T00:10:08.578867209Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.705998039s" Sep 13 00:10:08.579089 containerd[1590]: time="2025-09-13T00:10:08.578934435Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 13 00:10:08.581401 containerd[1590]: time="2025-09-13T00:10:08.581354614Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 13 00:10:08.935112 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:08.937695 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:09.027273 kubelet[2032]: E0913 00:10:09.027147 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:09.033818 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:09.034453 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
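The kubelet failures above (and the earlier one at 00:09:58) all trip on the same precondition: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd schedules a restart. On a kubeadm-provisioned node that file is written during init/join; until then the crash loop is expected. A trivial probe of the same precondition, as a sketch:

```python
# Probe the precondition the kubelet above fails on; the path comes
# straight from the logged error message.
from pathlib import Path

cfg = Path("/var/lib/kubelet/config.yaml")
print("kubelet config present" if cfg.is_file()
      else "kubelet will crash-loop: config not written yet")
```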
Sep 13 00:10:10.101447 containerd[1590]: time="2025-09-13T00:10:10.101356210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.103281 containerd[1590]: time="2025-09-13T00:10:10.103191893Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 13 00:10:10.104208 containerd[1590]: time="2025-09-13T00:10:10.103852029Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.107993 containerd[1590]: time="2025-09-13T00:10:10.107938230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:10.109830 containerd[1590]: time="2025-09-13T00:10:10.109667349Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.528254886s" Sep 13 00:10:10.109942 containerd[1590]: time="2025-09-13T00:10:10.109808134Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 13 00:10:10.110753 containerd[1590]: time="2025-09-13T00:10:10.110729114Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 13 00:10:10.112449 systemd-resolved[1473]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 13 00:10:11.390743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1295652799.mount: Deactivated successfully. 
Sep 13 00:10:12.105024 containerd[1590]: time="2025-09-13T00:10:12.104924119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.107323 containerd[1590]: time="2025-09-13T00:10:12.107009598Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 13 00:10:12.108551 containerd[1590]: time="2025-09-13T00:10:12.108213493Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.111253 containerd[1590]: time="2025-09-13T00:10:12.111156939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:12.112216 containerd[1590]: time="2025-09-13T00:10:12.112161235Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.00130574s" Sep 13 00:10:12.112322 containerd[1590]: time="2025-09-13T00:10:12.112222230Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 13 00:10:12.113196 containerd[1590]: time="2025-09-13T00:10:12.112923635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 13 00:10:12.701247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492040765.mount: Deactivated successfully. Sep 13 00:10:13.216979 systemd-resolved[1473]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
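The "Using degraded feature set UDP instead of UDP+EDNS0" entries above show systemd-resolved downgrading its transport after a server stops answering EDNS0 queries. A bare-bones sketch of that kind of probe, under the assumption that a timeout on the EDNS0 form justifies retrying without the OPT record; resolved's real logic is considerably more involved.

```python
# Sketch of an EDNS0-downgrade probe: query with an EDNS0 OPT record first,
# and retry as plain UDP on failure, mirroring the fallback logged above.
import socket
import struct

def dns_query(name: str, edns0: bool) -> bytes:
    # Header: ID, flags (RD), QDCOUNT=1, ANCOUNT=0, NSCOUNT=0, ARCOUNT.
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1 if edns0 else 0)
    qname = b"".join(struct.pack("B", len(l)) + l.encode()
                     for l in name.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    # OPT pseudo-RR: root name, TYPE=41, class = UDP payload size, TTL=0.
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0, 0) if edns0 else b""
    return header + question + opt

def probe(server: str) -> str:
    for edns0 in (True, False):
        s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        s.settimeout(2)
        try:
            s.sendto(dns_query("example.com", edns0), (server, 53))
            s.recv(512)
            return "UDP+EDNS0" if edns0 else "UDP"
        except OSError:
            continue  # timeout or refusal: fall back to the next feature set
        finally:
            s.close()
    return "no reply"

print(probe("67.207.67.2"))  # resolver address from the log
```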
Sep 13 00:10:13.896656 containerd[1590]: time="2025-09-13T00:10:13.896598989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.897996 containerd[1590]: time="2025-09-13T00:10:13.897940402Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 13 00:10:13.898731 containerd[1590]: time="2025-09-13T00:10:13.898498654Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.902233 containerd[1590]: time="2025-09-13T00:10:13.902165595Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:13.903979 containerd[1590]: time="2025-09-13T00:10:13.903828137Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.790859089s" Sep 13 00:10:13.903979 containerd[1590]: time="2025-09-13T00:10:13.903874250Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 13 00:10:13.904901 containerd[1590]: time="2025-09-13T00:10:13.904673950Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 13 00:10:14.342056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4111001228.mount: Deactivated successfully. 
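The mount unit names above, such as "var-lib-containerd-tmpmounts-containerd\x2dmount4111001228.mount", come from systemd's path escaping: "/" separators become "-", and characters like "-" inside a path component become \xNN escapes so the mapping stays reversible. A simplified sketch of that rule (see systemd.unit(5)); it skips edge cases such as a leading dot.

```python
# Simplified sketch of systemd's path-to-unit-name escaping, explaining the
# "\x2d" in the mount unit names above. Edge cases (leading ".", the root
# path) are deliberately omitted.
def systemd_escape_path(path: str) -> str:
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append(f"\\x{ord(ch):02x}")  # e.g. "-" -> \x2d
    return "".join(out)

name = systemd_escape_path(
    "/var/lib/containerd/tmpmounts/containerd-mount4111001228")
print(name + ".mount")
# -> var-lib-containerd-tmpmounts-containerd\x2dmount4111001228.mount
```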
Sep 13 00:10:14.350322 containerd[1590]: time="2025-09-13T00:10:14.349248460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.350322 containerd[1590]: time="2025-09-13T00:10:14.350260667Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 13 00:10:14.352042 containerd[1590]: time="2025-09-13T00:10:14.351997221Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.355517 containerd[1590]: time="2025-09-13T00:10:14.355453873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:14.356939 containerd[1590]: time="2025-09-13T00:10:14.356895104Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.188168ms" Sep 13 00:10:14.357133 containerd[1590]: time="2025-09-13T00:10:14.357115065Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 13 00:10:14.358086 containerd[1590]: time="2025-09-13T00:10:14.358054948Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 13 00:10:14.940871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2462401078.mount: Deactivated successfully. Sep 13 00:10:17.030008 containerd[1590]: time="2025-09-13T00:10:17.029932283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:17.032496 containerd[1590]: time="2025-09-13T00:10:17.032399581Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 13 00:10:17.033753 containerd[1590]: time="2025-09-13T00:10:17.032932770Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:17.039741 containerd[1590]: time="2025-09-13T00:10:17.038057241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:17.040435 containerd[1590]: time="2025-09-13T00:10:17.040376738Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.682109114s" Sep 13 00:10:17.040677 containerd[1590]: time="2025-09-13T00:10:17.040647446Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 13 00:10:19.065391 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Sep 13 00:10:19.075141 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:19.278960 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:19.296377 (kubelet)[2194]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 13 00:10:19.362700 kubelet[2194]: E0913 00:10:19.362508 2194 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 13 00:10:19.365476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 13 00:10:19.365658 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 13 00:10:20.328550 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:20.338208 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:20.390570 systemd[1]: Reloading requested from client PID 2211 ('systemctl') (unit session-7.scope)... Sep 13 00:10:20.390593 systemd[1]: Reloading... Sep 13 00:10:20.551824 zram_generator::config[2254]: No configuration found. Sep 13 00:10:20.734522 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:20.856238 systemd[1]: Reloading finished in 465 ms. Sep 13 00:10:20.911960 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 13 00:10:20.912067 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 13 00:10:20.912453 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:20.920806 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:21.094129 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:21.109486 (kubelet)[2313]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:10:21.167682 kubelet[2313]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:21.167682 kubelet[2313]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:10:21.167682 kubelet[2313]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 13 00:10:21.168243 kubelet[2313]: I0913 00:10:21.167872 2313 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:22.067163 kubelet[2313]: I0913 00:10:22.067067 2313 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:22.067163 kubelet[2313]: I0913 00:10:22.067148 2313 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:22.067764 kubelet[2313]: I0913 00:10:22.067645 2313 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:22.096731 kubelet[2313]: I0913 00:10:22.096665 2313 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:22.098417 kubelet[2313]: E0913 00:10:22.098085 2313 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://164.90.159.5:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:22.110673 kubelet[2313]: E0913 00:10:22.110613 2313 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:22.110673 kubelet[2313]: I0913 00:10:22.110667 2313 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:22.119735 kubelet[2313]: I0913 00:10:22.119038 2313 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:22.120817 kubelet[2313]: I0913 00:10:22.120769 2313 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:22.121329 kubelet[2313]: I0913 00:10:22.121269 2313 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:22.121673 kubelet[2313]: I0913 00:10:22.121327 2313 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-738365eea6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:10:22.121973 kubelet[2313]: I0913 00:10:22.121749 2313 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:22.121973 kubelet[2313]: I0913 00:10:22.121772 2313 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:22.122072 kubelet[2313]: I0913 00:10:22.122028 2313 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:22.126880 kubelet[2313]: I0913 00:10:22.126187 2313 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:22.126880 kubelet[2313]: I0913 00:10:22.126306 2313 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:22.126880 kubelet[2313]: I0913 00:10:22.126381 2313 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:22.126880 kubelet[2313]: I0913 00:10:22.126441 2313 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:22.134277 kubelet[2313]: W0913 00:10:22.134186 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.90.159.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-738365eea6&limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:22.134979 kubelet[2313]: E0913 00:10:22.134943 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://164.90.159.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-738365eea6&limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:22.135288 kubelet[2313]: I0913 00:10:22.135268 2313 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:22.138743 kubelet[2313]: W0913 00:10:22.138402 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.90.159.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:22.138743 kubelet[2313]: E0913 00:10:22.138475 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.90.159.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:22.139646 kubelet[2313]: I0913 00:10:22.139606 2313 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:22.139799 kubelet[2313]: W0913 00:10:22.139775 2313 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 13 00:10:22.142668 kubelet[2313]: I0913 00:10:22.142050 2313 server.go:1274] "Started kubelet" Sep 13 00:10:22.145125 kubelet[2313]: I0913 00:10:22.145071 2313 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:22.149745 kubelet[2313]: E0913 00:10:22.147868 2313 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://164.90.159.5:6443/api/v1/namespaces/default/events\": dial tcp 164.90.159.5:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.5-n-738365eea6.1864af0c58957eb3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.5-n-738365eea6,UID:ci-4081.3.5-n-738365eea6,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.5-n-738365eea6,},FirstTimestamp:2025-09-13 00:10:22.141996723 +0000 UTC m=+1.024262351,LastTimestamp:2025-09-13 00:10:22.141996723 +0000 UTC m=+1.024262351,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.5-n-738365eea6,}" Sep 13 00:10:22.149745 kubelet[2313]: I0913 00:10:22.149513 2313 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:22.152164 kubelet[2313]: I0913 00:10:22.152001 2313 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:22.156814 kubelet[2313]: I0913 00:10:22.156542 2313 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:22.156968 kubelet[2313]: I0913 00:10:22.156912 2313 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:22.158113 kubelet[2313]: I0913 00:10:22.158075 2313 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:22.163390 kubelet[2313]: I0913 00:10:22.160242 2313 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:22.163390 kubelet[2313]: E0913 00:10:22.160755 2313 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.5-n-738365eea6\" not found" Sep 13 00:10:22.163390 kubelet[2313]: I0913 00:10:22.161313 2313 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:22.163390 kubelet[2313]: I0913 00:10:22.161443 2313 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:22.163390 kubelet[2313]: E0913 00:10:22.161926 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.159.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-738365eea6?timeout=10s\": dial tcp 164.90.159.5:6443: connect: connection refused" interval="200ms" Sep 13 00:10:22.163824 kubelet[2313]: I0913 00:10:22.163792 2313 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:22.163960 kubelet[2313]: I0913 00:10:22.163937 2313 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:22.169498 kubelet[2313]: E0913 00:10:22.169444 2313 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:22.170216 kubelet[2313]: W0913 00:10:22.169751 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.90.159.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:22.170216 kubelet[2313]: E0913 00:10:22.169855 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.90.159.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:22.172203 kubelet[2313]: I0913 00:10:22.172154 2313 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:22.217913 kubelet[2313]: I0913 00:10:22.217840 2313 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:22.222779 kubelet[2313]: I0913 00:10:22.222244 2313 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:10:22.222779 kubelet[2313]: I0913 00:10:22.222326 2313 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:22.222779 kubelet[2313]: I0913 00:10:22.222380 2313 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:22.222779 kubelet[2313]: E0913 00:10:22.222455 2313 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:22.229799 kubelet[2313]: I0913 00:10:22.229410 2313 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:22.229799 kubelet[2313]: I0913 00:10:22.229437 2313 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:22.229799 kubelet[2313]: I0913 00:10:22.229472 2313 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:22.230435 kubelet[2313]: W0913 00:10:22.230369 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.90.159.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:22.230579 kubelet[2313]: E0913 00:10:22.230554 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.90.159.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:22.233844 kubelet[2313]: I0913 00:10:22.233803 2313 policy_none.go:49] "None policy: Start" Sep 13 00:10:22.235514 kubelet[2313]: I0913 00:10:22.235168 2313 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:22.235514 kubelet[2313]: I0913 00:10:22.235208 2313 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:22.242733 kubelet[2313]: I0913 00:10:22.242682 2313 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:22.243127 kubelet[2313]: I0913 00:10:22.243112 2313 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:22.243276 kubelet[2313]: I0913 00:10:22.243225 2313 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:22.245206 kubelet[2313]: I0913 00:10:22.245124 2313 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:22.252133 kubelet[2313]: E0913 00:10:22.252071 2313 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.5-n-738365eea6\" not found" Sep 13 00:10:22.346201 kubelet[2313]: I0913 00:10:22.345572 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.346658 kubelet[2313]: E0913 00:10:22.346610 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.159.5:6443/api/v1/nodes\": dial tcp 164.90.159.5:6443: connect: connection refused" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.362523 kubelet[2313]: E0913 00:10:22.362454 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.159.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-738365eea6?timeout=10s\": dial tcp 164.90.159.5:6443: connect: connection refused" interval="400ms" Sep 13 00:10:22.462483 
kubelet[2313]: I0913 00:10:22.462157 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462483 kubelet[2313]: I0913 00:10:22.462229 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462483 kubelet[2313]: I0913 00:10:22.462254 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462483 kubelet[2313]: I0913 00:10:22.462274 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462483 kubelet[2313]: I0913 00:10:22.462294 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa5b0a1022c8c90e028790f82152a36d-kubeconfig\") pod \"kube-scheduler-ci-4081.3.5-n-738365eea6\" (UID: \"aa5b0a1022c8c90e028790f82152a36d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462863 kubelet[2313]: I0913 00:10:22.462320 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462863 kubelet[2313]: I0913 00:10:22.462337 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462863 kubelet[2313]: I0913 00:10:22.462353 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.462863 kubelet[2313]: I0913 00:10:22.462369 2313 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.548564 kubelet[2313]: I0913 00:10:22.548497 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.549160 kubelet[2313]: E0913 00:10:22.549099 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.159.5:6443/api/v1/nodes\": dial tcp 164.90.159.5:6443: connect: connection refused" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.631735 kubelet[2313]: E0913 00:10:22.631030 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:22.632896 containerd[1590]: time="2025-09-13T00:10:22.632419316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-738365eea6,Uid:5b9eb57e79c69fa593396c4ad60137bb,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:22.634768 kubelet[2313]: E0913 00:10:22.634734 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:22.635125 kubelet[2313]: E0913 00:10:22.635094 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:22.636126 containerd[1590]: time="2025-09-13T00:10:22.636089863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-738365eea6,Uid:aa5b0a1022c8c90e028790f82152a36d,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:22.636523 containerd[1590]: time="2025-09-13T00:10:22.636488651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-738365eea6,Uid:3b65abc49cf17605ae7d7f7128c22cc0,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:22.638020 systemd-resolved[1473]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 13 00:10:22.763989 kubelet[2313]: E0913 00:10:22.763912 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.159.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-738365eea6?timeout=10s\": dial tcp 164.90.159.5:6443: connect: connection refused" interval="800ms" Sep 13 00:10:22.951545 kubelet[2313]: I0913 00:10:22.951371 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:22.951920 kubelet[2313]: E0913 00:10:22.951874 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.159.5:6443/api/v1/nodes\": dial tcp 164.90.159.5:6443: connect: connection refused" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:23.043606 kubelet[2313]: W0913 00:10:23.043525 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://164.90.159.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:23.043606 kubelet[2313]: E0913 00:10:23.043607 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://164.90.159.5:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:23.080340 kubelet[2313]: W0913 00:10:23.080285 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://164.90.159.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:23.080340 kubelet[2313]: E0913 00:10:23.080349 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://164.90.159.5:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:23.216632 kubelet[2313]: W0913 00:10:23.216284 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://164.90.159.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:23.216632 kubelet[2313]: E0913 00:10:23.216394 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://164.90.159.5:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:23.242522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700613232.mount: Deactivated successfully. 
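The controller.go:145 entries show the kubelet doubling its retry interval (200ms, 400ms, 800ms above, then 1.6s below) while it tries to create its node Lease against an API server that is still refusing connections. The object it is trying to ensure looks roughly like the following: holderIdentity matches the node name, 40s is the kubelet's default lease duration, and the renew timestamp here is illustrative:

  apiVersion: coordination.k8s.io/v1
  kind: Lease
  metadata:
    name: ci-4081.3.5-n-738365eea6       # one Lease per node, named after it
    namespace: kube-node-lease
  spec:
    holderIdentity: ci-4081.3.5-n-738365eea6
    leaseDurationSeconds: 40             # kubelet default
    renewTime: "2025-09-13T00:10:22.000000Z"   # illustrative value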
Sep 13 00:10:23.250209 containerd[1590]: time="2025-09-13T00:10:23.250128388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:23.251346 containerd[1590]: time="2025-09-13T00:10:23.251285450Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 13 00:10:23.252473 containerd[1590]: time="2025-09-13T00:10:23.252430633Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:23.253769 containerd[1590]: time="2025-09-13T00:10:23.253730973Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:23.254734 containerd[1590]: time="2025-09-13T00:10:23.254332944Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:23.254734 containerd[1590]: time="2025-09-13T00:10:23.254432249Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:23.255446 containerd[1590]: time="2025-09-13T00:10:23.255396132Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:10:23.256746 containerd[1590]: time="2025-09-13T00:10:23.256411472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:10:23.259964 containerd[1590]: time="2025-09-13T00:10:23.259789032Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.229448ms" Sep 13 00:10:23.262226 containerd[1590]: time="2025-09-13T00:10:23.261842710Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 625.267504ms" Sep 13 00:10:23.264299 containerd[1590]: time="2025-09-13T00:10:23.264233845Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 627.914133ms" Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.462023667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.462298873Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.462317127Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.460594236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.461181772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:23.463580 containerd[1590]: time="2025-09-13T00:10:23.461207831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.466491 containerd[1590]: time="2025-09-13T00:10:23.464856636Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.469941 containerd[1590]: time="2025-09-13T00:10:23.468577112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.472819 containerd[1590]: time="2025-09-13T00:10:23.472199266Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:23.472819 containerd[1590]: time="2025-09-13T00:10:23.472292751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:23.472819 containerd[1590]: time="2025-09-13T00:10:23.472321415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.472819 containerd[1590]: time="2025-09-13T00:10:23.472461152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:23.562568 kubelet[2313]: W0913 00:10:23.561206 2313 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://164.90.159.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-738365eea6&limit=500&resourceVersion=0": dial tcp 164.90.159.5:6443: connect: connection refused Sep 13 00:10:23.562568 kubelet[2313]: E0913 00:10:23.561287 2313 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://164.90.159.5:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.5-n-738365eea6&limit=500&resourceVersion=0\": dial tcp 164.90.159.5:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:10:23.566149 kubelet[2313]: E0913 00:10:23.566067 2313 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://164.90.159.5:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.5-n-738365eea6?timeout=10s\": dial tcp 164.90.159.5:6443: connect: connection refused" interval="1.6s" Sep 13 00:10:23.614764 containerd[1590]: time="2025-09-13T00:10:23.614717039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.5-n-738365eea6,Uid:5b9eb57e79c69fa593396c4ad60137bb,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0911525cca00f275d919dfa520cf7dca55d8af67e701a54ddd028f6585dc91\"" Sep 13 00:10:23.619396 kubelet[2313]: E0913 00:10:23.618967 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:23.621859 containerd[1590]: time="2025-09-13T00:10:23.621508070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.5-n-738365eea6,Uid:3b65abc49cf17605ae7d7f7128c22cc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"885190eebb4060378fcee597e1bd7ebfbff0a2a48af7596e3eb5c5f8591522a9\"" Sep 13 00:10:23.622462 kubelet[2313]: E0913 00:10:23.622204 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:23.625324 containerd[1590]: time="2025-09-13T00:10:23.625105086Z" level=info msg="CreateContainer within sandbox \"885190eebb4060378fcee597e1bd7ebfbff0a2a48af7596e3eb5c5f8591522a9\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:10:23.627105 containerd[1590]: time="2025-09-13T00:10:23.627056001Z" level=info msg="CreateContainer within sandbox \"bd0911525cca00f275d919dfa520cf7dca55d8af67e701a54ddd028f6585dc91\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:10:23.647292 containerd[1590]: time="2025-09-13T00:10:23.647161709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.5-n-738365eea6,Uid:aa5b0a1022c8c90e028790f82152a36d,Namespace:kube-system,Attempt:0,} returns sandbox id \"84b72713f56eeed24bbbd34ea3d8e045d7bcae3374c9a6676ed90516459e415d\"" Sep 13 00:10:23.648591 kubelet[2313]: E0913 00:10:23.648567 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:23.651537 containerd[1590]: time="2025-09-13T00:10:23.651498247Z" level=info 
msg="CreateContainer within sandbox \"84b72713f56eeed24bbbd34ea3d8e045d7bcae3374c9a6676ed90516459e415d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:10:23.656490 containerd[1590]: time="2025-09-13T00:10:23.656302589Z" level=info msg="CreateContainer within sandbox \"885190eebb4060378fcee597e1bd7ebfbff0a2a48af7596e3eb5c5f8591522a9\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8762e579b630bee781faa5ab962a2aa606e19185f1ee8791c60e5eaf62058006\"" Sep 13 00:10:23.657559 containerd[1590]: time="2025-09-13T00:10:23.657329093Z" level=info msg="StartContainer for \"8762e579b630bee781faa5ab962a2aa606e19185f1ee8791c60e5eaf62058006\"" Sep 13 00:10:23.669235 containerd[1590]: time="2025-09-13T00:10:23.669044375Z" level=info msg="CreateContainer within sandbox \"84b72713f56eeed24bbbd34ea3d8e045d7bcae3374c9a6676ed90516459e415d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2267e4926e0cf77cc4368766c4974ee30722bf682a173d5220d3146f49b01034\"" Sep 13 00:10:23.670638 containerd[1590]: time="2025-09-13T00:10:23.670410555Z" level=info msg="StartContainer for \"2267e4926e0cf77cc4368766c4974ee30722bf682a173d5220d3146f49b01034\"" Sep 13 00:10:23.675468 containerd[1590]: time="2025-09-13T00:10:23.675355007Z" level=info msg="CreateContainer within sandbox \"bd0911525cca00f275d919dfa520cf7dca55d8af67e701a54ddd028f6585dc91\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"21bdaa538d3003f099e5cfa188304e64fbee7ebe317c0d6d3f6eb36369b9a1d5\"" Sep 13 00:10:23.677543 containerd[1590]: time="2025-09-13T00:10:23.677383515Z" level=info msg="StartContainer for \"21bdaa538d3003f099e5cfa188304e64fbee7ebe317c0d6d3f6eb36369b9a1d5\"" Sep 13 00:10:23.757836 kubelet[2313]: I0913 00:10:23.755482 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:23.758556 kubelet[2313]: E0913 00:10:23.758521 2313 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://164.90.159.5:6443/api/v1/nodes\": dial tcp 164.90.159.5:6443: connect: connection refused" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:23.834946 containerd[1590]: time="2025-09-13T00:10:23.834889405Z" level=info msg="StartContainer for \"8762e579b630bee781faa5ab962a2aa606e19185f1ee8791c60e5eaf62058006\" returns successfully" Sep 13 00:10:23.857922 containerd[1590]: time="2025-09-13T00:10:23.857287122Z" level=info msg="StartContainer for \"21bdaa538d3003f099e5cfa188304e64fbee7ebe317c0d6d3f6eb36369b9a1d5\" returns successfully" Sep 13 00:10:23.876761 containerd[1590]: time="2025-09-13T00:10:23.875476729Z" level=info msg="StartContainer for \"2267e4926e0cf77cc4368766c4974ee30722bf682a173d5220d3146f49b01034\" returns successfully" Sep 13 00:10:24.254167 kubelet[2313]: E0913 00:10:24.254110 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:24.261780 kubelet[2313]: E0913 00:10:24.261732 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:24.270050 kubelet[2313]: E0913 00:10:24.269851 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 
00:10:25.269316 kubelet[2313]: E0913 00:10:25.269254 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:25.362783 kubelet[2313]: I0913 00:10:25.361728 2313 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:25.972259 kubelet[2313]: E0913 00:10:25.972172 2313 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.5-n-738365eea6\" not found" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:26.064381 kubelet[2313]: I0913 00:10:26.064078 2313 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:26.141805 kubelet[2313]: I0913 00:10:26.139985 2313 apiserver.go:52] "Watching apiserver" Sep 13 00:10:26.157680 kubelet[2313]: E0913 00:10:26.157312 2313 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:26.157680 kubelet[2313]: E0913 00:10:26.157589 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:26.162285 kubelet[2313]: I0913 00:10:26.162200 2313 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:26.274134 kubelet[2313]: E0913 00:10:26.273635 2313 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-738365eea6\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:26.274134 kubelet[2313]: E0913 00:10:26.273933 2313 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:28.061384 systemd[1]: Reloading requested from client PID 2584 ('systemctl') (unit session-7.scope)... Sep 13 00:10:28.061409 systemd[1]: Reloading... Sep 13 00:10:28.175789 zram_generator::config[2626]: No configuration found. Sep 13 00:10:28.364463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:10:28.458897 systemd[1]: Reloading finished in 396 ms. Sep 13 00:10:28.510137 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:28.525857 systemd[1]: kubelet.service: Deactivated successfully. Sep 13 00:10:28.526504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:28.535121 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:10:28.707150 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:10:28.711583 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:10:28.800119 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:28.800119 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:10:28.800119 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:10:28.800657 kubelet[2684]: I0913 00:10:28.800237 2684 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:10:28.812968 kubelet[2684]: I0913 00:10:28.812529 2684 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:10:28.812968 kubelet[2684]: I0913 00:10:28.812581 2684 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:10:28.813771 kubelet[2684]: I0913 00:10:28.813481 2684 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:10:28.816178 kubelet[2684]: I0913 00:10:28.816125 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 13 00:10:28.831674 kubelet[2684]: I0913 00:10:28.831618 2684 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:10:28.838978 kubelet[2684]: E0913 00:10:28.838920 2684 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:10:28.838978 kubelet[2684]: I0913 00:10:28.838972 2684 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:10:28.843870 kubelet[2684]: I0913 00:10:28.843817 2684 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 13 00:10:28.844932 kubelet[2684]: I0913 00:10:28.844810 2684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:10:28.845238 kubelet[2684]: I0913 00:10:28.845167 2684 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:10:28.846191 kubelet[2684]: I0913 00:10:28.845241 2684 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.5-n-738365eea6","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Sep 13 00:10:28.846356 kubelet[2684]: I0913 00:10:28.846211 2684 topology_manager.go:138] "Creating topology manager with none policy" Sep 13 00:10:28.846356 kubelet[2684]: I0913 00:10:28.846231 2684 container_manager_linux.go:300] "Creating device plugin manager" Sep 13 00:10:28.846356 kubelet[2684]: I0913 00:10:28.846284 2684 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:28.847258 kubelet[2684]: I0913 00:10:28.846454 2684 kubelet.go:408] "Attempting to sync node with API server" Sep 13 00:10:28.847258 kubelet[2684]: I0913 00:10:28.846479 2684 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 13 00:10:28.847258 kubelet[2684]: I0913 00:10:28.846525 2684 kubelet.go:314] "Adding apiserver pod source" Sep 13 00:10:28.847258 kubelet[2684]: I0913 00:10:28.846540 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 13 00:10:28.851770 kubelet[2684]: I0913 00:10:28.851738 2684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 13 00:10:28.852952 kubelet[2684]: I0913 00:10:28.852924 2684 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 13 00:10:28.853821 kubelet[2684]: I0913 00:10:28.853793 2684 server.go:1274] "Started kubelet" Sep 13 00:10:28.865920 kubelet[2684]: I0913 00:10:28.865581 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 13 00:10:28.869875 
kubelet[2684]: I0913 00:10:28.869815 2684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 13 00:10:28.876318 kubelet[2684]: I0913 00:10:28.874520 2684 server.go:449] "Adding debug handlers to kubelet server" Sep 13 00:10:28.878819 kubelet[2684]: I0913 00:10:28.876879 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 13 00:10:28.878819 kubelet[2684]: I0913 00:10:28.877821 2684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 13 00:10:28.878819 kubelet[2684]: I0913 00:10:28.878296 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 13 00:10:28.882481 kubelet[2684]: E0913 00:10:28.882447 2684 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 13 00:10:28.885095 kubelet[2684]: I0913 00:10:28.885051 2684 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 13 00:10:28.885588 kubelet[2684]: I0913 00:10:28.885446 2684 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 13 00:10:28.885699 kubelet[2684]: I0913 00:10:28.885683 2684 reconciler.go:26] "Reconciler: start to sync state" Sep 13 00:10:28.889318 kubelet[2684]: I0913 00:10:28.889279 2684 factory.go:221] Registration of the systemd container factory successfully Sep 13 00:10:28.889478 kubelet[2684]: I0913 00:10:28.889402 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 13 00:10:28.894101 kubelet[2684]: I0913 00:10:28.893483 2684 factory.go:221] Registration of the containerd container factory successfully Sep 13 00:10:28.919510 kubelet[2684]: I0913 00:10:28.919327 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 13 00:10:28.923893 kubelet[2684]: I0913 00:10:28.923130 2684 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 13 00:10:28.923893 kubelet[2684]: I0913 00:10:28.923168 2684 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 13 00:10:28.923893 kubelet[2684]: I0913 00:10:28.923192 2684 kubelet.go:2321] "Starting kubelet main sync loop" Sep 13 00:10:28.923893 kubelet[2684]: E0913 00:10:28.923249 2684 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 13 00:10:28.978548 kubelet[2684]: I0913 00:10:28.978421 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 13 00:10:28.978548 kubelet[2684]: I0913 00:10:28.978446 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 13 00:10:28.978548 kubelet[2684]: I0913 00:10:28.978469 2684 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:10:28.980488 kubelet[2684]: I0913 00:10:28.980446 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 13 00:10:28.980488 kubelet[2684]: I0913 00:10:28.980471 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 13 00:10:28.980488 kubelet[2684]: I0913 00:10:28.980496 2684 policy_none.go:49] "None policy: Start" Sep 13 00:10:28.982054 kubelet[2684]: I0913 00:10:28.982023 2684 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:10:28.982579 kubelet[2684]: I0913 00:10:28.982177 2684 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:10:28.982579 kubelet[2684]: I0913 00:10:28.982388 2684 state_mem.go:75] "Updated machine memory state" Sep 13 00:10:28.985393 kubelet[2684]: I0913 00:10:28.984236 2684 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:10:28.985393 kubelet[2684]: I0913 00:10:28.984437 2684 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:10:28.985393 kubelet[2684]: I0913 00:10:28.984449 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:10:28.985620 kubelet[2684]: I0913 00:10:28.985603 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:10:29.030913 kubelet[2684]: W0913 00:10:29.030505 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:10:29.032522 kubelet[2684]: W0913 00:10:29.032008 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:10:29.035129 kubelet[2684]: W0913 00:10:29.033667 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:10:29.090134 kubelet[2684]: I0913 00:10:29.089954 2684 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.110632 kubelet[2684]: I0913 00:10:29.110577 2684 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.111407 kubelet[2684]: I0913 00:10:29.110695 2684 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.188570 kubelet[2684]: I0913 00:10:29.187626 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-ca-certs\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.188570 kubelet[2684]: I0913 00:10:29.187724 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-k8s-certs\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.188570 kubelet[2684]: I0913 00:10:29.187762 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.188570 kubelet[2684]: I0913 00:10:29.187802 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5b9eb57e79c69fa593396c4ad60137bb-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.5-n-738365eea6\" (UID: \"5b9eb57e79c69fa593396c4ad60137bb\") " pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.188570 kubelet[2684]: I0913 00:10:29.187829 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-ca-certs\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.189004 kubelet[2684]: I0913 00:10:29.187855 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.189004 kubelet[2684]: I0913 00:10:29.187882 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.189004 kubelet[2684]: I0913 00:10:29.187910 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b65abc49cf17605ae7d7f7128c22cc0-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.5-n-738365eea6\" (UID: \"3b65abc49cf17605ae7d7f7128c22cc0\") " pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.189004 kubelet[2684]: I0913 00:10:29.187937 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/aa5b0a1022c8c90e028790f82152a36d-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.5-n-738365eea6\" (UID: \"aa5b0a1022c8c90e028790f82152a36d\") " pod="kube-system/kube-scheduler-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.333384 kubelet[2684]: E0913 00:10:29.332364 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:29.336183 kubelet[2684]: E0913 00:10:29.335420 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:29.337249 kubelet[2684]: E0913 00:10:29.336744 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:29.848928 kubelet[2684]: I0913 00:10:29.848866 2684 apiserver.go:52] "Watching apiserver" Sep 13 00:10:29.887246 kubelet[2684]: I0913 00:10:29.886832 2684 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:10:29.957739 kubelet[2684]: E0913 00:10:29.957674 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:29.958232 kubelet[2684]: E0913 00:10:29.958199 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:29.969886 kubelet[2684]: W0913 00:10:29.969843 2684 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 13 00:10:29.970058 kubelet[2684]: E0913 00:10:29.969936 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.5-n-738365eea6\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" Sep 13 00:10:29.970216 kubelet[2684]: E0913 00:10:29.970165 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:30.002033 kubelet[2684]: I0913 00:10:30.001678 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.5-n-738365eea6" podStartSLOduration=1.001581954 podStartE2EDuration="1.001581954s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:29.999742172 +0000 UTC m=+1.280377405" watchObservedRunningTime="2025-09-13 00:10:30.001581954 +0000 UTC m=+1.282217175" Sep 13 00:10:30.020113 kubelet[2684]: I0913 00:10:30.018622 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.5-n-738365eea6" podStartSLOduration=1.018592549 podStartE2EDuration="1.018592549s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:30.018556162 +0000 UTC m=+1.299191390" watchObservedRunningTime="2025-09-13 00:10:30.018592549 +0000 UTC m=+1.299227779" Sep 13 00:10:30.960096 
kubelet[2684]: E0913 00:10:30.960036 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:33.815307 kubelet[2684]: I0913 00:10:33.815244 2684 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 13 00:10:33.816188 containerd[1590]: time="2025-09-13T00:10:33.816121853Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 13 00:10:33.816608 kubelet[2684]: I0913 00:10:33.816535 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 13 00:10:34.766573 kubelet[2684]: I0913 00:10:34.763083 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.5-n-738365eea6" podStartSLOduration=5.76306361 podStartE2EDuration="5.76306361s" podCreationTimestamp="2025-09-13 00:10:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:30.041570038 +0000 UTC m=+1.322205270" watchObservedRunningTime="2025-09-13 00:10:34.76306361 +0000 UTC m=+6.043698826" Sep 13 00:10:34.824398 kubelet[2684]: I0913 00:10:34.824338 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c130efd6-2cc6-4051-84d3-463ee8021bc1-kube-proxy\") pod \"kube-proxy-kr967\" (UID: \"c130efd6-2cc6-4051-84d3-463ee8021bc1\") " pod="kube-system/kube-proxy-kr967" Sep 13 00:10:34.824398 kubelet[2684]: I0913 00:10:34.824392 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzb52\" (UniqueName: \"kubernetes.io/projected/c130efd6-2cc6-4051-84d3-463ee8021bc1-kube-api-access-zzb52\") pod \"kube-proxy-kr967\" (UID: \"c130efd6-2cc6-4051-84d3-463ee8021bc1\") " pod="kube-system/kube-proxy-kr967" Sep 13 00:10:34.824933 kubelet[2684]: I0913 00:10:34.824413 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c130efd6-2cc6-4051-84d3-463ee8021bc1-xtables-lock\") pod \"kube-proxy-kr967\" (UID: \"c130efd6-2cc6-4051-84d3-463ee8021bc1\") " pod="kube-system/kube-proxy-kr967" Sep 13 00:10:34.824933 kubelet[2684]: I0913 00:10:34.824432 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c130efd6-2cc6-4051-84d3-463ee8021bc1-lib-modules\") pod \"kube-proxy-kr967\" (UID: \"c130efd6-2cc6-4051-84d3-463ee8021bc1\") " pod="kube-system/kube-proxy-kr967" Sep 13 00:10:34.840819 kubelet[2684]: W0913 00:10:34.838555 2684 reflector.go:561] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.5-n-738365eea6" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.5-n-738365eea6' and this object Sep 13 00:10:34.840819 kubelet[2684]: E0913 00:10:34.838628 2684 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.5-n-738365eea6\" 
cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.5-n-738365eea6' and this object" logger="UnhandledError" Sep 13 00:10:34.840819 kubelet[2684]: W0913 00:10:34.840002 2684 reflector.go:561] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ci-4081.3.5-n-738365eea6" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ci-4081.3.5-n-738365eea6' and this object Sep 13 00:10:34.840819 kubelet[2684]: E0913 00:10:34.840043 2684 reflector.go:158] "Unhandled Error" err="object-\"tigera-operator\"/\"kubernetes-services-endpoint\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kubernetes-services-endpoint\" is forbidden: User \"system:node:ci-4081.3.5-n-738365eea6\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ci-4081.3.5-n-738365eea6' and this object" logger="UnhandledError" Sep 13 00:10:34.924891 kubelet[2684]: I0913 00:10:34.924851 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/630fe97c-f0dc-4bbd-a458-69f46660c617-var-lib-calico\") pod \"tigera-operator-58fc44c59b-q24vt\" (UID: \"630fe97c-f0dc-4bbd-a458-69f46660c617\") " pod="tigera-operator/tigera-operator-58fc44c59b-q24vt" Sep 13 00:10:34.925523 kubelet[2684]: I0913 00:10:34.925218 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69qdv\" (UniqueName: \"kubernetes.io/projected/630fe97c-f0dc-4bbd-a458-69f46660c617-kube-api-access-69qdv\") pod \"tigera-operator-58fc44c59b-q24vt\" (UID: \"630fe97c-f0dc-4bbd-a458-69f46660c617\") " pod="tigera-operator/tigera-operator-58fc44c59b-q24vt" Sep 13 00:10:35.070263 kubelet[2684]: E0913 00:10:35.069780 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:35.071595 containerd[1590]: time="2025-09-13T00:10:35.071539783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr967,Uid:c130efd6-2cc6-4051-84d3-463ee8021bc1,Namespace:kube-system,Attempt:0,}" Sep 13 00:10:35.114635 containerd[1590]: time="2025-09-13T00:10:35.114427333Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:35.114635 containerd[1590]: time="2025-09-13T00:10:35.114530655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:35.114635 containerd[1590]: time="2025-09-13T00:10:35.114548285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:35.115510 containerd[1590]: time="2025-09-13T00:10:35.114742368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:35.141176 systemd[1]: run-containerd-runc-k8s.io-a09113fac2c7be7026295375a7e8a11ba3baf2f47a9cb3a3d776ab271d120583-runc.krqUj9.mount: Deactivated successfully. 
Sep 13 00:10:35.173192 containerd[1590]: time="2025-09-13T00:10:35.173004128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kr967,Uid:c130efd6-2cc6-4051-84d3-463ee8021bc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a09113fac2c7be7026295375a7e8a11ba3baf2f47a9cb3a3d776ab271d120583\"" Sep 13 00:10:35.174773 kubelet[2684]: E0913 00:10:35.174063 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:35.179053 containerd[1590]: time="2025-09-13T00:10:35.179004354Z" level=info msg="CreateContainer within sandbox \"a09113fac2c7be7026295375a7e8a11ba3baf2f47a9cb3a3d776ab271d120583\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 13 00:10:35.200088 containerd[1590]: time="2025-09-13T00:10:35.199990360Z" level=info msg="CreateContainer within sandbox \"a09113fac2c7be7026295375a7e8a11ba3baf2f47a9cb3a3d776ab271d120583\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ca5f9a35a6c1abe54967bf581b39db0a79af6b88a0855f11f03a87243fb1f07a\"" Sep 13 00:10:35.202167 containerd[1590]: time="2025-09-13T00:10:35.202016416Z" level=info msg="StartContainer for \"ca5f9a35a6c1abe54967bf581b39db0a79af6b88a0855f11f03a87243fb1f07a\"" Sep 13 00:10:35.293763 containerd[1590]: time="2025-09-13T00:10:35.292575934Z" level=info msg="StartContainer for \"ca5f9a35a6c1abe54967bf581b39db0a79af6b88a0855f11f03a87243fb1f07a\" returns successfully" Sep 13 00:10:35.581051 kubelet[2684]: E0913 00:10:35.580923 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:35.973655 kubelet[2684]: E0913 00:10:35.973251 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:35.973655 kubelet[2684]: E0913 00:10:35.973296 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:36.007212 kubelet[2684]: I0913 00:10:36.006877 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kr967" podStartSLOduration=2.006853888 podStartE2EDuration="2.006853888s" podCreationTimestamp="2025-09-13 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:10:36.005678458 +0000 UTC m=+7.286313704" watchObservedRunningTime="2025-09-13 00:10:36.006853888 +0000 UTC m=+7.287489104" Sep 13 00:10:36.048893 containerd[1590]: time="2025-09-13T00:10:36.048284540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-q24vt,Uid:630fe97c-f0dc-4bbd-a458-69f46660c617,Namespace:tigera-operator,Attempt:0,}" Sep 13 00:10:36.087219 containerd[1590]: time="2025-09-13T00:10:36.086887379Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:36.087219 containerd[1590]: time="2025-09-13T00:10:36.086963409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:36.087219 containerd[1590]: time="2025-09-13T00:10:36.086979588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:36.087219 containerd[1590]: time="2025-09-13T00:10:36.087093271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:36.120458 systemd[1]: run-containerd-runc-k8s.io-3400cf86d8aaf94b73796910b86062ca298dad9c6e9c229e21f30196de82e6fd-runc.2zkDBy.mount: Deactivated successfully. Sep 13 00:10:36.172039 containerd[1590]: time="2025-09-13T00:10:36.171973783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-q24vt,Uid:630fe97c-f0dc-4bbd-a458-69f46660c617,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3400cf86d8aaf94b73796910b86062ca298dad9c6e9c229e21f30196de82e6fd\"" Sep 13 00:10:36.174379 containerd[1590]: time="2025-09-13T00:10:36.174337925Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 13 00:10:37.592861 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238469137.mount: Deactivated successfully. Sep 13 00:10:38.027367 kubelet[2684]: E0913 00:10:38.026648 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:38.790896 kubelet[2684]: E0913 00:10:38.790790 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:38.915744 containerd[1590]: time="2025-09-13T00:10:38.915626600Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:38.917325 containerd[1590]: time="2025-09-13T00:10:38.916966544Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 13 00:10:38.919753 containerd[1590]: time="2025-09-13T00:10:38.918219275Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:38.921609 containerd[1590]: time="2025-09-13T00:10:38.921559483Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:10:38.923109 containerd[1590]: time="2025-09-13T00:10:38.923056538Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.74867049s" Sep 13 00:10:38.923308 containerd[1590]: time="2025-09-13T00:10:38.923278388Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 13 00:10:38.928759 containerd[1590]: time="2025-09-13T00:10:38.928663184Z" level=info msg="CreateContainer within sandbox \"3400cf86d8aaf94b73796910b86062ca298dad9c6e9c229e21f30196de82e6fd\" for 
container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 13 00:10:38.947844 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989518578.mount: Deactivated successfully. Sep 13 00:10:38.949809 containerd[1590]: time="2025-09-13T00:10:38.949448678Z" level=info msg="CreateContainer within sandbox \"3400cf86d8aaf94b73796910b86062ca298dad9c6e9c229e21f30196de82e6fd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb1df61969ae45690a034bba988b9db3a8d5928471abdd40f368b66b32d682fa\"" Sep 13 00:10:38.951784 containerd[1590]: time="2025-09-13T00:10:38.951732858Z" level=info msg="StartContainer for \"eb1df61969ae45690a034bba988b9db3a8d5928471abdd40f368b66b32d682fa\"" Sep 13 00:10:39.000778 kubelet[2684]: E0913 00:10:38.994315 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:39.000778 kubelet[2684]: E0913 00:10:38.996182 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:39.111581 containerd[1590]: time="2025-09-13T00:10:39.111514998Z" level=info msg="StartContainer for \"eb1df61969ae45690a034bba988b9db3a8d5928471abdd40f368b66b32d682fa\" returns successfully" Sep 13 00:10:41.490240 update_engine[1560]: I20250913 00:10:41.490037 1560 update_attempter.cc:509] Updating boot flags... Sep 13 00:10:41.564185 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3024) Sep 13 00:10:46.832379 sudo[1797]: pam_unix(sudo:session): session closed for user root Sep 13 00:10:46.842816 sshd[1791]: pam_unix(sshd:session): session closed for user core Sep 13 00:10:46.856080 systemd[1]: sshd@6-164.90.159.5:22-139.178.68.195:56354.service: Deactivated successfully. Sep 13 00:10:46.862582 systemd-logind[1559]: Session 7 logged out. Waiting for processes to exit. Sep 13 00:10:46.863048 systemd[1]: session-7.scope: Deactivated successfully. Sep 13 00:10:46.866119 systemd-logind[1559]: Removed session 7. 
Sep 13 00:10:52.605368 kubelet[2684]: I0913 00:10:52.605271 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-q24vt" podStartSLOduration=15.853865675 podStartE2EDuration="18.605238865s" podCreationTimestamp="2025-09-13 00:10:34 +0000 UTC" firstStartedPulling="2025-09-13 00:10:36.173425446 +0000 UTC m=+7.454060638" lastFinishedPulling="2025-09-13 00:10:38.92479861 +0000 UTC m=+10.205433828" observedRunningTime="2025-09-13 00:10:40.014811336 +0000 UTC m=+11.295446552" watchObservedRunningTime="2025-09-13 00:10:52.605238865 +0000 UTC m=+23.885874110" Sep 13 00:10:52.652574 kubelet[2684]: I0913 00:10:52.652505 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4a5d4e8f-969c-4dfa-acc7-9beb04843fdc-typha-certs\") pod \"calico-typha-7bc66b9b75-w4n6j\" (UID: \"4a5d4e8f-969c-4dfa-acc7-9beb04843fdc\") " pod="calico-system/calico-typha-7bc66b9b75-w4n6j" Sep 13 00:10:52.652574 kubelet[2684]: I0913 00:10:52.652573 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4a5d4e8f-969c-4dfa-acc7-9beb04843fdc-tigera-ca-bundle\") pod \"calico-typha-7bc66b9b75-w4n6j\" (UID: \"4a5d4e8f-969c-4dfa-acc7-9beb04843fdc\") " pod="calico-system/calico-typha-7bc66b9b75-w4n6j" Sep 13 00:10:52.652903 kubelet[2684]: I0913 00:10:52.652605 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2fwg\" (UniqueName: \"kubernetes.io/projected/4a5d4e8f-969c-4dfa-acc7-9beb04843fdc-kube-api-access-k2fwg\") pod \"calico-typha-7bc66b9b75-w4n6j\" (UID: \"4a5d4e8f-969c-4dfa-acc7-9beb04843fdc\") " pod="calico-system/calico-typha-7bc66b9b75-w4n6j" Sep 13 00:10:52.939628 kubelet[2684]: E0913 00:10:52.939361 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:52.943515 containerd[1590]: time="2025-09-13T00:10:52.943304453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc66b9b75-w4n6j,Uid:4a5d4e8f-969c-4dfa-acc7-9beb04843fdc,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:52.955213 kubelet[2684]: I0913 00:10:52.955108 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-lib-modules\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.955923 kubelet[2684]: I0913 00:10:52.955889 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a27479e1-2b3f-414b-b48b-5537dd39e032-node-certs\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956006 kubelet[2684]: I0913 00:10:52.955947 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-xtables-lock\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956006 kubelet[2684]: I0913 
00:10:52.955976 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-policysync\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956006 kubelet[2684]: I0913 00:10:52.956001 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-cni-log-dir\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956134 kubelet[2684]: I0913 00:10:52.956031 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a27479e1-2b3f-414b-b48b-5537dd39e032-tigera-ca-bundle\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956134 kubelet[2684]: I0913 00:10:52.956064 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzsrb\" (UniqueName: \"kubernetes.io/projected/a27479e1-2b3f-414b-b48b-5537dd39e032-kube-api-access-qzsrb\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956134 kubelet[2684]: I0913 00:10:52.956095 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-cni-bin-dir\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.956134 kubelet[2684]: I0913 00:10:52.956123 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-flexvol-driver-host\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.958426 kubelet[2684]: I0913 00:10:52.956147 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-var-run-calico\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.958426 kubelet[2684]: I0913 00:10:52.956170 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-cni-net-dir\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:52.958426 kubelet[2684]: I0913 00:10:52.956195 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a27479e1-2b3f-414b-b48b-5537dd39e032-var-lib-calico\") pod \"calico-node-7fcww\" (UID: \"a27479e1-2b3f-414b-b48b-5537dd39e032\") " pod="calico-system/calico-node-7fcww" Sep 13 00:10:53.042013 containerd[1590]: time="2025-09-13T00:10:53.041488888Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.042013 containerd[1590]: time="2025-09-13T00:10:53.041632904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.042013 containerd[1590]: time="2025-09-13T00:10:53.041672853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.043583 containerd[1590]: time="2025-09-13T00:10:53.043349776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.063787 kubelet[2684]: E0913 00:10:53.062938 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.063787 kubelet[2684]: W0913 00:10:53.063005 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.063787 kubelet[2684]: E0913 00:10:53.063060 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.107002 kubelet[2684]: E0913 00:10:53.106375 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.107002 kubelet[2684]: W0913 00:10:53.106415 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.107002 kubelet[2684]: E0913 00:10:53.106454 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.115228 kubelet[2684]: E0913 00:10:53.115076 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.115228 kubelet[2684]: W0913 00:10:53.115114 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.115228 kubelet[2684]: E0913 00:10:53.115153 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.183847 containerd[1590]: time="2025-09-13T00:10:53.183781715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fcww,Uid:a27479e1-2b3f-414b-b48b-5537dd39e032,Namespace:calico-system,Attempt:0,}" Sep 13 00:10:53.287589 kubelet[2684]: E0913 00:10:53.284052 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a" Sep 13 00:10:53.300093 containerd[1590]: time="2025-09-13T00:10:53.299939820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7bc66b9b75-w4n6j,Uid:4a5d4e8f-969c-4dfa-acc7-9beb04843fdc,Namespace:calico-system,Attempt:0,} returns sandbox id \"52f18ad603f802ec760ffbb3600c143dd4958cec67fe7224f8201508a391a580\"" Sep 13 00:10:53.309440 kubelet[2684]: E0913 00:10:53.309382 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:10:53.322922 containerd[1590]: time="2025-09-13T00:10:53.322857771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 13 00:10:53.348892 containerd[1590]: time="2025-09-13T00:10:53.348346608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:10:53.348892 containerd[1590]: time="2025-09-13T00:10:53.348430933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:10:53.348892 containerd[1590]: time="2025-09-13T00:10:53.348448105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.348892 containerd[1590]: time="2025-09-13T00:10:53.348660516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:10:53.358324 kubelet[2684]: E0913 00:10:53.358283 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.358973 kubelet[2684]: W0913 00:10:53.358563 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.358973 kubelet[2684]: E0913 00:10:53.358605 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.360320 kubelet[2684]: E0913 00:10:53.360145 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.360320 kubelet[2684]: W0913 00:10:53.360169 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.360320 kubelet[2684]: E0913 00:10:53.360193 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.360968 kubelet[2684]: E0913 00:10:53.360462 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.360968 kubelet[2684]: W0913 00:10:53.360493 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.360968 kubelet[2684]: E0913 00:10:53.360508 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.361470 kubelet[2684]: E0913 00:10:53.361173 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.361470 kubelet[2684]: W0913 00:10:53.361199 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.361470 kubelet[2684]: E0913 00:10:53.361217 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.362732 kubelet[2684]: E0913 00:10:53.361944 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.362732 kubelet[2684]: W0913 00:10:53.361973 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.362732 kubelet[2684]: E0913 00:10:53.361989 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.363215 kubelet[2684]: E0913 00:10:53.363011 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.363215 kubelet[2684]: W0913 00:10:53.363030 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.363215 kubelet[2684]: E0913 00:10:53.363048 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.363485 kubelet[2684]: E0913 00:10:53.363456 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.363799 kubelet[2684]: W0913 00:10:53.363566 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.363799 kubelet[2684]: E0913 00:10:53.363591 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.364203 kubelet[2684]: E0913 00:10:53.364187 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.364314 kubelet[2684]: W0913 00:10:53.364300 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.364432 kubelet[2684]: E0913 00:10:53.364405 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.365489 kubelet[2684]: E0913 00:10:53.365439 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.365489 kubelet[2684]: W0913 00:10:53.365457 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.365850 kubelet[2684]: E0913 00:10:53.365664 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.366229 kubelet[2684]: E0913 00:10:53.366096 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.366229 kubelet[2684]: W0913 00:10:53.366112 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.366229 kubelet[2684]: E0913 00:10:53.366141 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.366719 kubelet[2684]: E0913 00:10:53.366601 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.366719 kubelet[2684]: W0913 00:10:53.366636 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.366978 kubelet[2684]: E0913 00:10:53.366872 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.367268 kubelet[2684]: E0913 00:10:53.367252 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.367547 kubelet[2684]: W0913 00:10:53.367332 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.367547 kubelet[2684]: E0913 00:10:53.367439 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.367888 kubelet[2684]: I0913 00:10:53.367685 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/04b956de-e0b5-4148-a086-60a45e92f38a-kubelet-dir\") pod \"csi-node-driver-qsg52\" (UID: \"04b956de-e0b5-4148-a086-60a45e92f38a\") " pod="calico-system/csi-node-driver-qsg52" Sep 13 00:10:53.368258 kubelet[2684]: E0913 00:10:53.368183 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.368258 kubelet[2684]: W0913 00:10:53.368199 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.368447 kubelet[2684]: E0913 00:10:53.368247 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.368447 kubelet[2684]: I0913 00:10:53.368302 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/04b956de-e0b5-4148-a086-60a45e92f38a-registration-dir\") pod \"csi-node-driver-qsg52\" (UID: \"04b956de-e0b5-4148-a086-60a45e92f38a\") " pod="calico-system/csi-node-driver-qsg52" Sep 13 00:10:53.369111 kubelet[2684]: E0913 00:10:53.368867 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.369111 kubelet[2684]: W0913 00:10:53.368895 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.369111 kubelet[2684]: E0913 00:10:53.369057 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.369790 kubelet[2684]: E0913 00:10:53.369629 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.369790 kubelet[2684]: W0913 00:10:53.369649 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.369790 kubelet[2684]: E0913 00:10:53.369679 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.371161 kubelet[2684]: E0913 00:10:53.370858 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.371161 kubelet[2684]: W0913 00:10:53.370876 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.371161 kubelet[2684]: E0913 00:10:53.371118 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.371957 kubelet[2684]: E0913 00:10:53.371614 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.371957 kubelet[2684]: W0913 00:10:53.371632 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.371957 kubelet[2684]: E0913 00:10:53.371922 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.373163 kubelet[2684]: E0913 00:10:53.373020 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.373163 kubelet[2684]: W0913 00:10:53.373038 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.373163 kubelet[2684]: E0913 00:10:53.373068 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.373743 kubelet[2684]: E0913 00:10:53.373493 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.373743 kubelet[2684]: W0913 00:10:53.373511 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.373743 kubelet[2684]: E0913 00:10:53.373527 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.374132 kubelet[2684]: E0913 00:10:53.373994 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.374132 kubelet[2684]: W0913 00:10:53.374012 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.374132 kubelet[2684]: E0913 00:10:53.374028 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.375211 kubelet[2684]: E0913 00:10:53.374823 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.375211 kubelet[2684]: W0913 00:10:53.374843 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.375611 kubelet[2684]: E0913 00:10:53.375361 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.376566 kubelet[2684]: E0913 00:10:53.376258 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.376566 kubelet[2684]: W0913 00:10:53.376278 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.376566 kubelet[2684]: E0913 00:10:53.376306 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.376963 kubelet[2684]: E0913 00:10:53.376822 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.376963 kubelet[2684]: W0913 00:10:53.376839 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.376963 kubelet[2684]: E0913 00:10:53.376856 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.377534 kubelet[2684]: E0913 00:10:53.377409 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.377534 kubelet[2684]: W0913 00:10:53.377427 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.377534 kubelet[2684]: E0913 00:10:53.377443 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.378242 kubelet[2684]: E0913 00:10:53.377993 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.378242 kubelet[2684]: W0913 00:10:53.378011 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.378242 kubelet[2684]: E0913 00:10:53.378026 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.378606 kubelet[2684]: E0913 00:10:53.378456 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.378606 kubelet[2684]: W0913 00:10:53.378484 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.378606 kubelet[2684]: E0913 00:10:53.378499 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.472066 kubelet[2684]: E0913 00:10:53.471178 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.472066 kubelet[2684]: W0913 00:10:53.471222 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.472066 kubelet[2684]: E0913 00:10:53.471263 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.475694 kubelet[2684]: I0913 00:10:53.472025 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/04b956de-e0b5-4148-a086-60a45e92f38a-varrun\") pod \"csi-node-driver-qsg52\" (UID: \"04b956de-e0b5-4148-a086-60a45e92f38a\") " pod="calico-system/csi-node-driver-qsg52" Sep 13 00:10:53.477361 kubelet[2684]: E0913 00:10:53.477323 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.478831 kubelet[2684]: W0913 00:10:53.478516 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.478831 kubelet[2684]: E0913 00:10:53.478607 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.479685 kubelet[2684]: E0913 00:10:53.479637 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.479948 kubelet[2684]: W0913 00:10:53.479837 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.480415 kubelet[2684]: E0913 00:10:53.480355 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.482257 kubelet[2684]: E0913 00:10:53.482128 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.482257 kubelet[2684]: W0913 00:10:53.482186 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.482540 kubelet[2684]: E0913 00:10:53.482466 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.483061 kubelet[2684]: E0913 00:10:53.482987 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.483061 kubelet[2684]: W0913 00:10:53.483000 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.483249 kubelet[2684]: E0913 00:10:53.483145 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.483249 kubelet[2684]: I0913 00:10:53.483175 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x2mn\" (UniqueName: \"kubernetes.io/projected/04b956de-e0b5-4148-a086-60a45e92f38a-kube-api-access-5x2mn\") pod \"csi-node-driver-qsg52\" (UID: \"04b956de-e0b5-4148-a086-60a45e92f38a\") " pod="calico-system/csi-node-driver-qsg52" Sep 13 00:10:53.484608 kubelet[2684]: E0913 00:10:53.484248 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.484608 kubelet[2684]: W0913 00:10:53.484277 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.484608 kubelet[2684]: E0913 00:10:53.484301 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.484608 kubelet[2684]: E0913 00:10:53.484556 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.484608 kubelet[2684]: W0913 00:10:53.484565 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.484608 kubelet[2684]: E0913 00:10:53.484578 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.485280 kubelet[2684]: I0913 00:10:53.484902 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/04b956de-e0b5-4148-a086-60a45e92f38a-socket-dir\") pod \"csi-node-driver-qsg52\" (UID: \"04b956de-e0b5-4148-a086-60a45e92f38a\") " pod="calico-system/csi-node-driver-qsg52" Sep 13 00:10:53.486483 kubelet[2684]: E0913 00:10:53.486072 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.486483 kubelet[2684]: W0913 00:10:53.486189 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.486483 kubelet[2684]: E0913 00:10:53.486455 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 13 00:10:53.487530 kubelet[2684]: E0913 00:10:53.487255 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.487530 kubelet[2684]: W0913 00:10:53.487272 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.487530 kubelet[2684]: E0913 00:10:53.487327 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.488226 kubelet[2684]: E0913 00:10:53.488053 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.488226 kubelet[2684]: W0913 00:10:53.488070 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.488960 kubelet[2684]: E0913 00:10:53.488782 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.488960 kubelet[2684]: E0913 00:10:53.488911 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.489342 kubelet[2684]: W0913 00:10:53.489202 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.489342 kubelet[2684]: E0913 00:10:53.489296 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.490292 kubelet[2684]: E0913 00:10:53.490194 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.490292 kubelet[2684]: W0913 00:10:53.490209 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.490787 kubelet[2684]: E0913 00:10:53.490222 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 13 00:10:53.492393 kubelet[2684]: E0913 00:10:53.491886 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 13 00:10:53.492393 kubelet[2684]: W0913 00:10:53.491927 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 13 00:10:53.492393 kubelet[2684]: E0913 00:10:53.491944 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 13 00:10:53.492393 kubelet[2684]: E0913 00:10:53.492174 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:53.492393 kubelet[2684]: W0913 00:10:53.492182 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:53.492393 kubelet[2684]: E0913 00:10:53.492191 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:10:53.493338 containerd[1590]: time="2025-09-13T00:10:53.490877624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7fcww,Uid:a27479e1-2b3f-414b-b48b-5537dd39e032,Namespace:calico-system,Attempt:0,} returns sandbox id \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\""
Sep 13 00:10:53.586473 kubelet[2684]: E0913 00:10:53.586187 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:53.586473 kubelet[2684]: W0913 00:10:53.586223 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:53.586473 kubelet[2684]: E0913 00:10:53.586269 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:10:53.607551 kubelet[2684]: E0913 00:10:53.607496 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:53.607551 kubelet[2684]: W0913 00:10:53.607533 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:53.608097 kubelet[2684]: E0913 00:10:53.607579 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:10:54.899506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272664662.mount: Deactivated successfully.
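
The repeated driver-call.go/plugins.go errors above are the kubelet probing its FlexVolume plugin directory before Calico has installed the nodeagent~uds/uds binary: the exec fails with "executable file not found in $PATH", so the probe reads empty output where it expects a JSON status reply, hence "unexpected end of JSON input". A FlexVolume driver is just an executable that answers subcommands such as init with JSON on stdout. A minimal sketch of that handshake in Go, assuming only the documented status format (illustrative, not Calico's actual uds driver):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // driverStatus mirrors the JSON reply a FlexVolume driver prints on stdout.
    type driverStatus struct {
    	Status       string          `json:"status"` // "Success", "Failure", or "Not supported"
    	Message      string          `json:"message,omitempty"`
    	Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func reply(s driverStatus) {
    	out, _ := json.Marshal(s)
    	fmt.Println(string(out))
    }

    func main() {
    	if len(os.Args) < 2 {
    		reply(driverStatus{Status: "Failure", Message: "no command given"})
    		os.Exit(1)
    	}
    	switch os.Args[1] {
    	case "init":
    		// An empty or missing reply here is exactly what produces
    		// "unexpected end of JSON input" in the kubelet log above.
    		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
    	default:
    		reply(driverStatus{Status: "Not supported", Message: os.Args[1]})
    	}
    }

Placed at the path shown in the log and marked executable, a driver shaped like this would let the kubelet's probe unmarshal a status instead of failing.
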
Sep 13 00:10:54.924230 kubelet[2684]: E0913 00:10:54.924102 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a"
Sep 13 00:10:56.226080 containerd[1590]: time="2025-09-13T00:10:56.226000542Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:56.227374 containerd[1590]: time="2025-09-13T00:10:56.227304643Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389"
Sep 13 00:10:56.228339 containerd[1590]: time="2025-09-13T00:10:56.228072745Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:56.230687 containerd[1590]: time="2025-09-13T00:10:56.230641589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:56.231664 containerd[1590]: time="2025-09-13T00:10:56.231623530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.908701536s"
Sep 13 00:10:56.232396 containerd[1590]: time="2025-09-13T00:10:56.231773467Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 13 00:10:56.233176 containerd[1590]: time="2025-09-13T00:10:56.233136121Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 13 00:10:56.260018 containerd[1590]: time="2025-09-13T00:10:56.259961569Z" level=info msg="CreateContainer within sandbox \"52f18ad603f802ec760ffbb3600c143dd4958cec67fe7224f8201508a391a580\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 13 00:10:56.276735 containerd[1590]: time="2025-09-13T00:10:56.276538349Z" level=info msg="CreateContainer within sandbox \"52f18ad603f802ec760ffbb3600c143dd4958cec67fe7224f8201508a391a580\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0c5211570f787d7e6ea3ba1120fb520866186cdd9ea60d0dd97534e18dcfa9c2\""
Sep 13 00:10:56.277814 containerd[1590]: time="2025-09-13T00:10:56.277373261Z" level=info msg="StartContainer for \"0c5211570f787d7e6ea3ba1120fb520866186cdd9ea60d0dd97534e18dcfa9c2\""
Sep 13 00:10:56.435442 containerd[1590]: time="2025-09-13T00:10:56.435380947Z" level=info msg="StartContainer for \"0c5211570f787d7e6ea3ba1120fb520866186cdd9ea60d0dd97534e18dcfa9c2\" returns successfully"
Sep 13 00:10:56.925567 kubelet[2684]: E0913 00:10:56.925122 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a"
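
The PullImage/ImageCreate/Pulled sequence above is containerd's CRI plugin resolving and unpacking ghcr.io/flatcar/calico/typha:v3.30.3. Roughly the same pull can be reproduced against the same daemon with the containerd Go client; a minimal sketch, assuming the default socket path and the k8s.io namespace CRI-managed images live in:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	"github.com/containerd/containerd"
    	"github.com/containerd/containerd/namespaces"
    )

    func main() {
    	// Talk to the same containerd instance the kubelet uses.
    	client, err := containerd.New("/run/containerd/containerd.sock")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	// CRI-managed images live in the "k8s.io" namespace.
    	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

    	// Pull and unpack, mirroring the PullImage/ImageCreate events in the log.
    	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.30.3", containerd.WithPullUnpack)
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println("pulled", img.Name(), "digest", img.Target().Digest)
    }
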
Sep 13 00:10:57.074920 kubelet[2684]: E0913 00:10:57.073464 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:10:57.095277 kubelet[2684]: I0913 00:10:57.095187 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7bc66b9b75-w4n6j" podStartSLOduration=2.178476119 podStartE2EDuration="5.095153372s" podCreationTimestamp="2025-09-13 00:10:52 +0000 UTC" firstStartedPulling="2025-09-13 00:10:53.316150978 +0000 UTC m=+24.596786202" lastFinishedPulling="2025-09-13 00:10:56.232828247 +0000 UTC m=+27.513463455" observedRunningTime="2025-09-13 00:10:57.092905525 +0000 UTC m=+28.373540773" watchObservedRunningTime="2025-09-13 00:10:57.095153372 +0000 UTC m=+28.375788612"
Sep 13 00:10:57.119507 kubelet[2684]: E0913 00:10:57.116875 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:57.119507 kubelet[2684]: W0913 00:10:57.116939 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:57.119507 kubelet[2684]: E0913 00:10:57.116975 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:10:57.217297 kubelet[2684]: E0913 00:10:57.215636 2684 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 13 00:10:57.217297 kubelet[2684]: W0913 00:10:57.215673 2684 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 13 00:10:57.217297 kubelet[2684]: E0913 00:10:57.215732 2684 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 13 00:10:57.851777 containerd[1590]: time="2025-09-13T00:10:57.851009858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:57.856100 containerd[1590]: time="2025-09-13T00:10:57.855873792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 13 00:10:57.859294 containerd[1590]: time="2025-09-13T00:10:57.858030145Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:57.860600 containerd[1590]: time="2025-09-13T00:10:57.860534658Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:10:57.861783 containerd[1590]: time="2025-09-13T00:10:57.861740283Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.628529051s"
Sep 13 00:10:57.861912 containerd[1590]: time="2025-09-13T00:10:57.861895596Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 13 00:10:57.867103 containerd[1590]: time="2025-09-13T00:10:57.866172014Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 13 00:10:57.887864 containerd[1590]: time="2025-09-13T00:10:57.887812182Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0\""
Sep 13 00:10:57.888996 containerd[1590]: time="2025-09-13T00:10:57.888965777Z" level=info msg="StartContainer for \"f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0\""
Sep 13 00:10:57.999073 containerd[1590]: time="2025-09-13T00:10:57.999001564Z" level=info msg="StartContainer for \"f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0\" returns successfully"
Sep 13 00:10:58.053652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0-rootfs.mount: Deactivated successfully.
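
The flexvol-driver container created and started above comes from the pod2daemon-flexvol image just pulled; its job is to copy Calico's uds FlexVolume driver into the kubelet plugin directory, which is what eventually quiets the probe errors earlier in this log. A sketch of that install step (the destination path is taken from the log; the source path and the copy-then-rename strategy are illustrative assumptions, not Calico's actual installer):

    package main

    import (
    	"io"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	src := "/usr/local/bin/flexvol" // hypothetical location of the driver inside the image
    	dst := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		log.Fatal(err)
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer in.Close()

    	// Write to a temporary name and rename, so the kubelet never execs a half-copied binary.
    	tmp := dst + ".tmp"
    	out, err := os.OpenFile(tmp, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0o755)
    	if err != nil {
    		log.Fatal(err)
    	}
    	if _, err := io.Copy(out, in); err != nil {
    		log.Fatal(err)
    	}
    	out.Close()
    	if err := os.Rename(tmp, dst); err != nil {
    		log.Fatal(err)
    	}
    }
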
Sep 13 00:10:58.081755 kubelet[2684]: E0913 00:10:58.081440 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:10:58.100827 containerd[1590]: time="2025-09-13T00:10:58.062653136Z" level=info msg="shim disconnected" id=f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0 namespace=k8s.io
Sep 13 00:10:58.101099 containerd[1590]: time="2025-09-13T00:10:58.101067638Z" level=warning msg="cleaning up after shim disconnected" id=f02973ce6c2956e3a8632a3edb41f756618a8f471c799cc2f6e034cba859aee0 namespace=k8s.io
Sep 13 00:10:58.101175 containerd[1590]: time="2025-09-13T00:10:58.101162550Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:10:58.925441 kubelet[2684]: E0913 00:10:58.925288 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a"
Sep 13 00:10:59.085663 kubelet[2684]: E0913 00:10:59.085578 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:10:59.088293 containerd[1590]: time="2025-09-13T00:10:59.087918639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 13 00:11:00.925673 kubelet[2684]: E0913 00:11:00.925151 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a"
Sep 13 00:11:02.924939 kubelet[2684]: E0913 00:11:02.924419 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a"
Sep 13 00:11:02.950765 containerd[1590]: time="2025-09-13T00:11:02.950652122Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:02.952350 containerd[1590]: time="2025-09-13T00:11:02.952166672Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 13 00:11:02.953671 containerd[1590]: time="2025-09-13T00:11:02.953394908Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:02.967906 containerd[1590]: time="2025-09-13T00:11:02.967605101Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:02.968684 containerd[1590]: time="2025-09-13T00:11:02.968630824Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 3.880652152s"
Sep 13 00:11:02.969389 containerd[1590]: time="2025-09-13T00:11:02.968690713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 13 00:11:02.974019 containerd[1590]: time="2025-09-13T00:11:02.973976724Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 13 00:11:03.002566 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666128202.mount: Deactivated successfully.
Sep 13 00:11:03.006513 containerd[1590]: time="2025-09-13T00:11:03.004888506Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8\""
Sep 13 00:11:03.006513 containerd[1590]: time="2025-09-13T00:11:03.005797444Z" level=info msg="StartContainer for \"66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8\""
Sep 13 00:11:03.141829 containerd[1590]: time="2025-09-13T00:11:03.141678518Z" level=info msg="StartContainer for \"66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8\" returns successfully"
Sep 13 00:11:03.855621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8-rootfs.mount: Deactivated successfully.
Sep 13 00:11:03.856348 containerd[1590]: time="2025-09-13T00:11:03.855566342Z" level=info msg="shim disconnected" id=66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8 namespace=k8s.io
Sep 13 00:11:03.856947 containerd[1590]: time="2025-09-13T00:11:03.856496863Z" level=warning msg="cleaning up after shim disconnected" id=66f4a2b9d2e8d691edaab0f835ff70167ac6e7295205d7826aaed1ae8b3ebde8 namespace=k8s.io
Sep 13 00:11:03.856947 containerd[1590]: time="2025-09-13T00:11:03.856526568Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:11:03.883093 kubelet[2684]: I0913 00:11:03.882784 2684 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:11:04.075878 kubelet[2684]: I0913 00:11:04.075538 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a2c2404d-4798-433c-8a7a-69c289c0678c-config\") pod \"goldmane-7988f88666-nslpp\" (UID: \"a2c2404d-4798-433c-8a7a-69c289c0678c\") " pod="calico-system/goldmane-7988f88666-nslpp"
Sep 13 00:11:04.075878 kubelet[2684]: I0913 00:11:04.075601 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a98a040e-2ea2-4095-a76f-86a4e06c7abd-tigera-ca-bundle\") pod \"calico-kube-controllers-5c4785d957-fpstv\" (UID: \"a98a040e-2ea2-4095-a76f-86a4e06c7abd\") " pod="calico-system/calico-kube-controllers-5c4785d957-fpstv"
Sep 13 00:11:04.075878 kubelet[2684]: I0913 00:11:04.075635 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv25m\" (UniqueName: \"kubernetes.io/projected/e4db298b-8af8-4bf0-9f46-9f2489a8fa88-kube-api-access-jv25m\") pod \"calico-apiserver-75744756bc-gcpv4\" (UID: \"e4db298b-8af8-4bf0-9f46-9f2489a8fa88\") " pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4"
\"calico-apiserver-75744756bc-gcpv4\" (UID: \"e4db298b-8af8-4bf0-9f46-9f2489a8fa88\") " pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4" Sep 13 00:11:04.075878 kubelet[2684]: I0913 00:11:04.075668 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-ca-bundle\") pod \"whisker-7d994bd988-rfj2l\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " pod="calico-system/whisker-7d994bd988-rfj2l" Sep 13 00:11:04.075878 kubelet[2684]: I0913 00:11:04.075697 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5-config-volume\") pod \"coredns-7c65d6cfc9-vzpdb\" (UID: \"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5\") " pod="kube-system/coredns-7c65d6cfc9-vzpdb" Sep 13 00:11:04.078888 kubelet[2684]: I0913 00:11:04.075749 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r478x\" (UniqueName: \"kubernetes.io/projected/0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d-kube-api-access-r478x\") pod \"calico-apiserver-75744756bc-dm52h\" (UID: \"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d\") " pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" Sep 13 00:11:04.078888 kubelet[2684]: I0913 00:11:04.075787 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/a2c2404d-4798-433c-8a7a-69c289c0678c-goldmane-key-pair\") pod \"goldmane-7988f88666-nslpp\" (UID: \"a2c2404d-4798-433c-8a7a-69c289c0678c\") " pod="calico-system/goldmane-7988f88666-nslpp" Sep 13 00:11:04.078888 kubelet[2684]: I0913 00:11:04.075814 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hmr\" (UniqueName: \"kubernetes.io/projected/ed56a258-9735-4b0f-b601-d65b5338149b-kube-api-access-q8hmr\") pod \"coredns-7c65d6cfc9-bk5m4\" (UID: \"ed56a258-9735-4b0f-b601-d65b5338149b\") " pod="kube-system/coredns-7c65d6cfc9-bk5m4" Sep 13 00:11:04.078888 kubelet[2684]: I0913 00:11:04.075849 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-backend-key-pair\") pod \"whisker-7d994bd988-rfj2l\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " pod="calico-system/whisker-7d994bd988-rfj2l" Sep 13 00:11:04.078888 kubelet[2684]: I0913 00:11:04.077873 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ed56a258-9735-4b0f-b601-d65b5338149b-config-volume\") pod \"coredns-7c65d6cfc9-bk5m4\" (UID: \"ed56a258-9735-4b0f-b601-d65b5338149b\") " pod="kube-system/coredns-7c65d6cfc9-bk5m4" Sep 13 00:11:04.079093 kubelet[2684]: I0913 00:11:04.077925 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9g2pb\" (UniqueName: \"kubernetes.io/projected/7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5-kube-api-access-9g2pb\") pod \"coredns-7c65d6cfc9-vzpdb\" (UID: \"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5\") " pod="kube-system/coredns-7c65d6cfc9-vzpdb" Sep 13 00:11:04.079093 kubelet[2684]: I0913 00:11:04.078074 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d-calico-apiserver-certs\") pod \"calico-apiserver-75744756bc-dm52h\" (UID: \"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d\") " pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" Sep 13 00:11:04.079093 kubelet[2684]: I0913 00:11:04.078112 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjgh7\" (UniqueName: \"kubernetes.io/projected/37599b9e-55ad-4d52-b2bf-48d68b18c66b-kube-api-access-gjgh7\") pod \"whisker-7d994bd988-rfj2l\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " pod="calico-system/whisker-7d994bd988-rfj2l" Sep 13 00:11:04.079093 kubelet[2684]: I0913 00:11:04.078146 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt6rn\" (UniqueName: \"kubernetes.io/projected/a98a040e-2ea2-4095-a76f-86a4e06c7abd-kube-api-access-vt6rn\") pod \"calico-kube-controllers-5c4785d957-fpstv\" (UID: \"a98a040e-2ea2-4095-a76f-86a4e06c7abd\") " pod="calico-system/calico-kube-controllers-5c4785d957-fpstv" Sep 13 00:11:04.079093 kubelet[2684]: I0913 00:11:04.078173 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c24p9\" (UniqueName: \"kubernetes.io/projected/a2c2404d-4798-433c-8a7a-69c289c0678c-kube-api-access-c24p9\") pod \"goldmane-7988f88666-nslpp\" (UID: \"a2c2404d-4798-433c-8a7a-69c289c0678c\") " pod="calico-system/goldmane-7988f88666-nslpp" Sep 13 00:11:04.079303 kubelet[2684]: I0913 00:11:04.078202 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2c2404d-4798-433c-8a7a-69c289c0678c-goldmane-ca-bundle\") pod \"goldmane-7988f88666-nslpp\" (UID: \"a2c2404d-4798-433c-8a7a-69c289c0678c\") " pod="calico-system/goldmane-7988f88666-nslpp" Sep 13 00:11:04.079303 kubelet[2684]: I0913 00:11:04.078243 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e4db298b-8af8-4bf0-9f46-9f2489a8fa88-calico-apiserver-certs\") pod \"calico-apiserver-75744756bc-gcpv4\" (UID: \"e4db298b-8af8-4bf0-9f46-9f2489a8fa88\") " pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4" Sep 13 00:11:04.122915 containerd[1590]: time="2025-09-13T00:11:04.122553542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 13 00:11:04.264194 containerd[1590]: time="2025-09-13T00:11:04.263795948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d994bd988-rfj2l,Uid:37599b9e-55ad-4d52-b2bf-48d68b18c66b,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:04.283003 containerd[1590]: time="2025-09-13T00:11:04.282592514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-gcpv4,Uid:e4db298b-8af8-4bf0-9f46-9f2489a8fa88,Namespace:calico-apiserver,Attempt:0,}" Sep 13 00:11:04.318249 containerd[1590]: time="2025-09-13T00:11:04.318182912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4785d957-fpstv,Uid:a98a040e-2ea2-4095-a76f-86a4e06c7abd,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:04.325217 kubelet[2684]: E0913 00:11:04.324894 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
Sep 13 00:11:04.328761 containerd[1590]: time="2025-09-13T00:11:04.319348768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nslpp,Uid:a2c2404d-4798-433c-8a7a-69c289c0678c,Namespace:calico-system,Attempt:0,}"
Sep 13 00:11:04.331770 containerd[1590]: time="2025-09-13T00:11:04.331625466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzpdb,Uid:7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:04.335620 containerd[1590]: time="2025-09-13T00:11:04.332152929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-dm52h,Uid:0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d,Namespace:calico-apiserver,Attempt:0,}"
Sep 13 00:11:04.558085 kubelet[2684]: E0913 00:11:04.556978 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:11:04.558765 containerd[1590]: time="2025-09-13T00:11:04.558571615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5m4,Uid:ed56a258-9735-4b0f-b601-d65b5338149b,Namespace:kube-system,Attempt:0,}"
Sep 13 00:11:04.650238 containerd[1590]: time="2025-09-13T00:11:04.650150596Z" level=error msg="Failed to destroy network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.662333 containerd[1590]: time="2025-09-13T00:11:04.662257590Z" level=error msg="encountered an error cleaning up failed sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.698953 containerd[1590]: time="2025-09-13T00:11:04.698873209Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d994bd988-rfj2l,Uid:37599b9e-55ad-4d52-b2bf-48d68b18c66b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.702307 kubelet[2684]: E0913 00:11:04.701771 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.702728 kubelet[2684]: E0913 00:11:04.702174 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d994bd988-rfj2l"
Sep 13 00:11:04.702728 kubelet[2684]: E0913 00:11:04.702417 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d994bd988-rfj2l"
Sep 13 00:11:04.703865 kubelet[2684]: E0913 00:11:04.703012 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d994bd988-rfj2l_calico-system(37599b9e-55ad-4d52-b2bf-48d68b18c66b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d994bd988-rfj2l_calico-system(37599b9e-55ad-4d52-b2bf-48d68b18c66b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d994bd988-rfj2l" podUID="37599b9e-55ad-4d52-b2bf-48d68b18c66b"
Sep 13 00:11:04.725318 containerd[1590]: time="2025-09-13T00:11:04.725024929Z" level=error msg="Failed to destroy network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.726119 containerd[1590]: time="2025-09-13T00:11:04.725909872Z" level=error msg="encountered an error cleaning up failed sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.726119 containerd[1590]: time="2025-09-13T00:11:04.726015441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-gcpv4,Uid:e4db298b-8af8-4bf0-9f46-9f2489a8fa88,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.727274 kubelet[2684]: E0913 00:11:04.726912 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.727274 kubelet[2684]: E0913 00:11:04.727026 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4"
Sep 13 00:11:04.727274 kubelet[2684]: E0913 00:11:04.727221 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4"
Sep 13 00:11:04.728607 kubelet[2684]: E0913 00:11:04.727499 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75744756bc-gcpv4_calico-apiserver(e4db298b-8af8-4bf0-9f46-9f2489a8fa88)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75744756bc-gcpv4_calico-apiserver(e4db298b-8af8-4bf0-9f46-9f2489a8fa88)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4" podUID="e4db298b-8af8-4bf0-9f46-9f2489a8fa88"
Sep 13 00:11:04.743014 containerd[1590]: time="2025-09-13T00:11:04.742938010Z" level=error msg="Failed to destroy network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.746374 containerd[1590]: time="2025-09-13T00:11:04.746179781Z" level=error msg="encountered an error cleaning up failed sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.746374 containerd[1590]: time="2025-09-13T00:11:04.746277724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4785d957-fpstv,Uid:a98a040e-2ea2-4095-a76f-86a4e06c7abd,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.747238 kubelet[2684]: E0913 00:11:04.746591 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.747238 kubelet[2684]: E0913 00:11:04.746659 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4785d957-fpstv"
Sep 13 00:11:04.747238 kubelet[2684]: E0913 00:11:04.746693 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c4785d957-fpstv"
Sep 13 00:11:04.747340 kubelet[2684]: E0913 00:11:04.746757 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c4785d957-fpstv_calico-system(a98a040e-2ea2-4095-a76f-86a4e06c7abd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c4785d957-fpstv_calico-system(a98a040e-2ea2-4095-a76f-86a4e06c7abd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c4785d957-fpstv" podUID="a98a040e-2ea2-4095-a76f-86a4e06c7abd"
Sep 13 00:11:04.788892 containerd[1590]: time="2025-09-13T00:11:04.788764858Z" level=error msg="Failed to destroy network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.789473 containerd[1590]: time="2025-09-13T00:11:04.789301695Z" level=error msg="encountered an error cleaning up failed sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.789473 containerd[1590]: time="2025-09-13T00:11:04.789371288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-dm52h,Uid:0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.790428 kubelet[2684]: E0913 00:11:04.789804 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 13 00:11:04.790428 kubelet[2684]: E0913
00:11:04.789870 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" Sep 13 00:11:04.790428 kubelet[2684]: E0913 00:11:04.789894 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" Sep 13 00:11:04.790589 kubelet[2684]: E0913 00:11:04.789945 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75744756bc-dm52h_calico-apiserver(0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75744756bc-dm52h_calico-apiserver(0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" podUID="0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d" Sep 13 00:11:04.794730 containerd[1590]: time="2025-09-13T00:11:04.793798449Z" level=error msg="Failed to destroy network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.794730 containerd[1590]: time="2025-09-13T00:11:04.794251638Z" level=error msg="encountered an error cleaning up failed sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.794730 containerd[1590]: time="2025-09-13T00:11:04.794310083Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzpdb,Uid:7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.795310 kubelet[2684]: E0913 00:11:04.795054 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.795310 kubelet[2684]: E0913 00:11:04.795135 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vzpdb" Sep 13 00:11:04.795310 kubelet[2684]: E0913 00:11:04.795163 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-vzpdb" Sep 13 00:11:04.795441 kubelet[2684]: E0913 00:11:04.795244 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-vzpdb_kube-system(7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-vzpdb_kube-system(7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vzpdb" podUID="7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5" Sep 13 00:11:04.813920 containerd[1590]: time="2025-09-13T00:11:04.813775778Z" level=error msg="Failed to destroy network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.814798 containerd[1590]: time="2025-09-13T00:11:04.814731912Z" level=error msg="encountered an error cleaning up failed sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.815089 containerd[1590]: time="2025-09-13T00:11:04.815057443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nslpp,Uid:a2c2404d-4798-433c-8a7a-69c289c0678c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.815499 kubelet[2684]: E0913 00:11:04.815442 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.815596 kubelet[2684]: E0913 00:11:04.815530 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nslpp" Sep 13 00:11:04.815596 kubelet[2684]: E0913 00:11:04.815555 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-nslpp" Sep 13 00:11:04.816776 kubelet[2684]: E0913 00:11:04.815607 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-nslpp_calico-system(a2c2404d-4798-433c-8a7a-69c289c0678c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-nslpp_calico-system(a2c2404d-4798-433c-8a7a-69c289c0678c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nslpp" podUID="a2c2404d-4798-433c-8a7a-69c289c0678c" Sep 13 00:11:04.834389 containerd[1590]: time="2025-09-13T00:11:04.834322485Z" level=error msg="Failed to destroy network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.834833 containerd[1590]: time="2025-09-13T00:11:04.834774642Z" level=error msg="encountered an error cleaning up failed sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.834903 containerd[1590]: time="2025-09-13T00:11:04.834856206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5m4,Uid:ed56a258-9735-4b0f-b601-d65b5338149b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.835568 kubelet[2684]: E0913 00:11:04.835139 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:04.835568 kubelet[2684]: E0913 00:11:04.835228 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk5m4" Sep 13 00:11:04.835568 kubelet[2684]: E0913 00:11:04.835252 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-bk5m4" Sep 13 00:11:04.835844 kubelet[2684]: E0913 00:11:04.835296 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-bk5m4_kube-system(ed56a258-9735-4b0f-b601-d65b5338149b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-bk5m4_kube-system(ed56a258-9735-4b0f-b601-d65b5338149b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bk5m4" podUID="ed56a258-9735-4b0f-b601-d65b5338149b" Sep 13 00:11:04.930309 containerd[1590]: time="2025-09-13T00:11:04.929756711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsg52,Uid:04b956de-e0b5-4148-a086-60a45e92f38a,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:05.025153 containerd[1590]: time="2025-09-13T00:11:05.024891474Z" level=error msg="Failed to destroy network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.025952 containerd[1590]: time="2025-09-13T00:11:05.025689010Z" level=error msg="encountered an error cleaning up failed sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.025952 containerd[1590]: time="2025-09-13T00:11:05.025800149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsg52,Uid:04b956de-e0b5-4148-a086-60a45e92f38a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.026741 kubelet[2684]: E0913 00:11:05.026386 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.026741 kubelet[2684]: E0913 00:11:05.026480 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsg52" Sep 13 00:11:05.026741 kubelet[2684]: E0913 00:11:05.026508 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qsg52" Sep 13 00:11:05.026987 kubelet[2684]: E0913 00:11:05.026578 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qsg52_calico-system(04b956de-e0b5-4148-a086-60a45e92f38a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qsg52_calico-system(04b956de-e0b5-4148-a086-60a45e92f38a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a" Sep 13 00:11:05.119660 kubelet[2684]: I0913 00:11:05.119616 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:05.127806 kubelet[2684]: I0913 00:11:05.127127 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:05.129909 containerd[1590]: time="2025-09-13T00:11:05.129442781Z" level=info msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" Sep 13 00:11:05.129909 containerd[1590]: time="2025-09-13T00:11:05.129828514Z" level=info msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" Sep 13 00:11:05.134107 containerd[1590]: time="2025-09-13T00:11:05.131102447Z" level=info msg="Ensure that sandbox f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc in task-service has been cleanup successfully" Sep 13 00:11:05.134107 containerd[1590]: time="2025-09-13T00:11:05.133681952Z" level=info msg="Ensure that sandbox c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57 in task-service has been cleanup 
successfully" Sep 13 00:11:05.142660 kubelet[2684]: I0913 00:11:05.141776 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:05.147384 containerd[1590]: time="2025-09-13T00:11:05.146232359Z" level=info msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\"" Sep 13 00:11:05.151380 containerd[1590]: time="2025-09-13T00:11:05.150967651Z" level=info msg="Ensure that sandbox f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1 in task-service has been cleanup successfully" Sep 13 00:11:05.156689 kubelet[2684]: I0913 00:11:05.155015 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:05.158290 containerd[1590]: time="2025-09-13T00:11:05.157549909Z" level=info msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" Sep 13 00:11:05.159245 containerd[1590]: time="2025-09-13T00:11:05.159095110Z" level=info msg="Ensure that sandbox ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3 in task-service has been cleanup successfully" Sep 13 00:11:05.160149 kubelet[2684]: I0913 00:11:05.159691 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:05.160316 containerd[1590]: time="2025-09-13T00:11:05.160274691Z" level=info msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" Sep 13 00:11:05.160464 containerd[1590]: time="2025-09-13T00:11:05.160441327Z" level=info msg="Ensure that sandbox 01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e in task-service has been cleanup successfully" Sep 13 00:11:05.169868 kubelet[2684]: I0913 00:11:05.169824 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:05.174784 containerd[1590]: time="2025-09-13T00:11:05.174748651Z" level=info msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" Sep 13 00:11:05.176597 containerd[1590]: time="2025-09-13T00:11:05.176097128Z" level=info msg="Ensure that sandbox bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378 in task-service has been cleanup successfully" Sep 13 00:11:05.179844 kubelet[2684]: I0913 00:11:05.179814 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:05.190622 containerd[1590]: time="2025-09-13T00:11:05.190152929Z" level=info msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\"" Sep 13 00:11:05.191673 kubelet[2684]: I0913 00:11:05.191307 2684 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:05.193744 containerd[1590]: time="2025-09-13T00:11:05.191824352Z" level=info msg="Ensure that sandbox 66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826 in task-service has been cleanup successfully" Sep 13 00:11:05.206230 containerd[1590]: time="2025-09-13T00:11:05.206081939Z" level=info msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" Sep 13 
00:11:05.206449 containerd[1590]: time="2025-09-13T00:11:05.206341692Z" level=info msg="Ensure that sandbox 7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8 in task-service has been cleanup successfully" Sep 13 00:11:05.308251 containerd[1590]: time="2025-09-13T00:11:05.308167332Z" level=error msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" failed" error="failed to destroy network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.309101 kubelet[2684]: E0913 00:11:05.308498 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:05.309101 kubelet[2684]: E0913 00:11:05.308608 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc"} Sep 13 00:11:05.309101 kubelet[2684]: E0913 00:11:05.308676 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"04b956de-e0b5-4148-a086-60a45e92f38a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.309101 kubelet[2684]: E0913 00:11:05.308716 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"04b956de-e0b5-4148-a086-60a45e92f38a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qsg52" podUID="04b956de-e0b5-4148-a086-60a45e92f38a" Sep 13 00:11:05.319491 containerd[1590]: time="2025-09-13T00:11:05.319433576Z" level=error msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" failed" error="failed to destroy network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.320241 kubelet[2684]: E0913 00:11:05.320030 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" podSandboxID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:05.320241 kubelet[2684]: E0913 00:11:05.320106 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57"} Sep 13 00:11:05.320241 kubelet[2684]: E0913 00:11:05.320169 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.320241 kubelet[2684]: E0913 00:11:05.320196 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-vzpdb" podUID="7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5" Sep 13 00:11:05.333547 containerd[1590]: time="2025-09-13T00:11:05.333177539Z" level=error msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" failed" error="failed to destroy network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.333729 kubelet[2684]: E0913 00:11:05.333467 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:05.333729 kubelet[2684]: E0913 00:11:05.333543 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378"} Sep 13 00:11:05.333729 kubelet[2684]: E0913 00:11:05.333590 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e4db298b-8af8-4bf0-9f46-9f2489a8fa88\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.333729 kubelet[2684]: E0913 00:11:05.333617 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e4db298b-8af8-4bf0-9f46-9f2489a8fa88\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4" podUID="e4db298b-8af8-4bf0-9f46-9f2489a8fa88" Sep 13 00:11:05.348846 containerd[1590]: time="2025-09-13T00:11:05.348283943Z" level=error msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" failed" error="failed to destroy network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.349437 kubelet[2684]: E0913 00:11:05.349335 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:05.349745 kubelet[2684]: E0913 00:11:05.349525 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8"} Sep 13 00:11:05.349745 kubelet[2684]: E0913 00:11:05.349566 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ed56a258-9735-4b0f-b601-d65b5338149b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.349866 kubelet[2684]: E0913 00:11:05.349776 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ed56a258-9735-4b0f-b601-d65b5338149b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-bk5m4" podUID="ed56a258-9735-4b0f-b601-d65b5338149b" Sep 13 00:11:05.364756 containerd[1590]: time="2025-09-13T00:11:05.363435640Z" level=error msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" failed" error="failed to destroy network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.364896 kubelet[2684]: E0913 00:11:05.363805 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:05.364896 kubelet[2684]: E0913 00:11:05.363877 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3"} Sep 13 00:11:05.364896 kubelet[2684]: E0913 00:11:05.363925 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.364896 kubelet[2684]: E0913 00:11:05.363956 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" podUID="0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d" Sep 13 00:11:05.366756 containerd[1590]: time="2025-09-13T00:11:05.366688418Z" level=error msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" failed" error="failed to destroy network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.367185 kubelet[2684]: E0913 00:11:05.367141 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:05.367446 kubelet[2684]: E0913 00:11:05.367302 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"} Sep 13 00:11:05.367446 kubelet[2684]: E0913 00:11:05.367358 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a98a040e-2ea2-4095-a76f-86a4e06c7abd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.367446 kubelet[2684]: E0913 00:11:05.367400 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a98a040e-2ea2-4095-a76f-86a4e06c7abd\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c4785d957-fpstv" podUID="a98a040e-2ea2-4095-a76f-86a4e06c7abd" Sep 13 00:11:05.380173 containerd[1590]: time="2025-09-13T00:11:05.380036357Z" level=error msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" failed" error="failed to destroy network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.380768 containerd[1590]: time="2025-09-13T00:11:05.380664742Z" level=error msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" failed" error="failed to destroy network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 13 00:11:05.381549 kubelet[2684]: E0913 00:11:05.381503 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:05.381678 kubelet[2684]: E0913 00:11:05.381566 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"} Sep 13 00:11:05.381678 kubelet[2684]: E0913 00:11:05.381602 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.381678 kubelet[2684]: E0913 00:11:05.381627 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/whisker-7d994bd988-rfj2l" podUID="37599b9e-55ad-4d52-b2bf-48d68b18c66b" Sep 13 00:11:05.383007 kubelet[2684]: E0913 00:11:05.382384 2684 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:05.383007 kubelet[2684]: E0913 00:11:05.382534 2684 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e"} Sep 13 00:11:05.383007 kubelet[2684]: E0913 00:11:05.382787 2684 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a2c2404d-4798-433c-8a7a-69c289c0678c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 13 00:11:05.383007 kubelet[2684]: E0913 00:11:05.382860 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a2c2404d-4798-433c-8a7a-69c289c0678c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-nslpp" podUID="a2c2404d-4798-433c-8a7a-69c289c0678c" Sep 13 00:11:12.911860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3979487860.mount: Deactivated successfully. 
Sep 13 00:11:13.005769 containerd[1590]: time="2025-09-13T00:11:12.997439539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 13 00:11:13.067822 containerd[1590]: time="2025-09-13T00:11:13.066586364Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 8.931788965s" Sep 13 00:11:13.067822 containerd[1590]: time="2025-09-13T00:11:13.066696271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 13 00:11:13.110196 containerd[1590]: time="2025-09-13T00:11:13.109138859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.129354 containerd[1590]: time="2025-09-13T00:11:13.129290526Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.131082 containerd[1590]: time="2025-09-13T00:11:13.131033524Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:13.160361 containerd[1590]: time="2025-09-13T00:11:13.160250156Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 13 00:11:13.220222 containerd[1590]: time="2025-09-13T00:11:13.213361630Z" level=info msg="CreateContainer within sandbox \"9bce4e44e16e0939478eb64f4a9c905546db8515372ebc246907dd1872922619\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4f459de11107551deebbc6ff30442fe2ee36a74faa8ffee59db8166f7268916c\"" Sep 13 00:11:13.223001 containerd[1590]: time="2025-09-13T00:11:13.222007259Z" level=info msg="StartContainer for \"4f459de11107551deebbc6ff30442fe2ee36a74faa8ffee59db8166f7268916c\"" Sep 13 00:11:13.506262 systemd-resolved[1473]: Under memory pressure, flushing caches. Sep 13 00:11:13.511507 systemd-journald[1140]: Under memory pressure, flushing caches. Sep 13 00:11:13.506340 systemd-resolved[1473]: Flushed all caches. Sep 13 00:11:13.612675 containerd[1590]: time="2025-09-13T00:11:13.612611905Z" level=info msg="StartContainer for \"4f459de11107551deebbc6ff30442fe2ee36a74faa8ffee59db8166f7268916c\" returns successfully" Sep 13 00:11:13.753672 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 13 00:11:13.753887 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
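The pull that unblocks networking finishes here: 157,078,339 bytes read in 8.931788965s. A back-of-envelope throughput check from those two logged figures, as a sketch:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Both figures are taken verbatim from the containerd lines above.
	const bytesRead = 157078339                        // "stop pulling image ...: bytes read=157078339"
	elapsed, err := time.ParseDuration("8.931788965s") // "Pulled ... in 8.931788965s"
	if err != nil {
		panic(err)
	}
	// Prints roughly 16.8 MiB/s, the average rate over the whole pull.
	fmt.Printf("average pull throughput: %.1f MiB/s\n",
		float64(bytesRead)/elapsed.Seconds()/(1<<20))
}

The wireguard kernel lines that follow coincide with calico-node starting (Calico can use WireGuard for node-to-node encryption, and probing for support loads the module); they are informational, not an error.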
Sep 13 00:11:14.039619 containerd[1590]: time="2025-09-13T00:11:14.038624694Z" level=info msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\"" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.218 [INFO][3885] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.221 [INFO][3885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" iface="eth0" netns="/var/run/netns/cni-d772d16e-22d3-3607-7632-a916d76c32bc" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.224 [INFO][3885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" iface="eth0" netns="/var/run/netns/cni-d772d16e-22d3-3607-7632-a916d76c32bc" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.226 [INFO][3885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" iface="eth0" netns="/var/run/netns/cni-d772d16e-22d3-3607-7632-a916d76c32bc" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.226 [INFO][3885] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.226 [INFO][3885] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.508 [INFO][3892] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.510 [INFO][3892] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.511 [INFO][3892] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.523 [WARNING][3892] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.523 [INFO][3892] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0" Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.527 [INFO][3892] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:14.534087 containerd[1590]: 2025-09-13 00:11:14.530 [INFO][3885] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Sep 13 00:11:14.539165 containerd[1590]: time="2025-09-13T00:11:14.538774332Z" level=info msg="TearDown network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" successfully" Sep 13 00:11:14.539165 containerd[1590]: time="2025-09-13T00:11:14.538834412Z" level=info msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" returns successfully" Sep 13 00:11:14.547261 systemd[1]: run-netns-cni\x2dd772d16e\x2d22d3\x2d3607\x2d7632\x2da916d76c32bc.mount: Deactivated successfully. Sep 13 00:11:14.693064 kubelet[2684]: I0913 00:11:14.692992 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-backend-key-pair\") pod \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " Sep 13 00:11:14.693064 kubelet[2684]: I0913 00:11:14.693079 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjgh7\" (UniqueName: \"kubernetes.io/projected/37599b9e-55ad-4d52-b2bf-48d68b18c66b-kube-api-access-gjgh7\") pod \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " Sep 13 00:11:14.693915 kubelet[2684]: I0913 00:11:14.693110 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-ca-bundle\") pod \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\" (UID: \"37599b9e-55ad-4d52-b2bf-48d68b18c66b\") " Sep 13 00:11:14.705987 systemd[1]: var-lib-kubelet-pods-37599b9e\x2d55ad\x2d4d52\x2db2bf\x2d48d68b18c66b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgjgh7.mount: Deactivated successfully. Sep 13 00:11:14.706181 systemd[1]: var-lib-kubelet-pods-37599b9e\x2d55ad\x2d4d52\x2db2bf\x2d48d68b18c66b-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 13 00:11:14.709824 kubelet[2684]: I0913 00:11:14.709745 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37599b9e-55ad-4d52-b2bf-48d68b18c66b-kube-api-access-gjgh7" (OuterVolumeSpecName: "kube-api-access-gjgh7") pod "37599b9e-55ad-4d52-b2bf-48d68b18c66b" (UID: "37599b9e-55ad-4d52-b2bf-48d68b18c66b"). InnerVolumeSpecName "kube-api-access-gjgh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:11:14.712599 kubelet[2684]: I0913 00:11:14.712549 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "37599b9e-55ad-4d52-b2bf-48d68b18c66b" (UID: "37599b9e-55ad-4d52-b2bf-48d68b18c66b"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:11:14.712826 kubelet[2684]: I0913 00:11:14.704354 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "37599b9e-55ad-4d52-b2bf-48d68b18c66b" (UID: "37599b9e-55ad-4d52-b2bf-48d68b18c66b"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:11:14.794621 kubelet[2684]: I0913 00:11:14.794420 2684 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-backend-key-pair\") on node \"ci-4081.3.5-n-738365eea6\" DevicePath \"\"" Sep 13 00:11:14.794621 kubelet[2684]: I0913 00:11:14.794481 2684 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/37599b9e-55ad-4d52-b2bf-48d68b18c66b-whisker-ca-bundle\") on node \"ci-4081.3.5-n-738365eea6\" DevicePath \"\"" Sep 13 00:11:14.794621 kubelet[2684]: I0913 00:11:14.794500 2684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjgh7\" (UniqueName: \"kubernetes.io/projected/37599b9e-55ad-4d52-b2bf-48d68b18c66b-kube-api-access-gjgh7\") on node \"ci-4081.3.5-n-738365eea6\" DevicePath \"\"" Sep 13 00:11:15.341931 kubelet[2684]: I0913 00:11:15.328021 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7fcww" podStartSLOduration=3.726839452 podStartE2EDuration="23.302521517s" podCreationTimestamp="2025-09-13 00:10:52 +0000 UTC" firstStartedPulling="2025-09-13 00:10:53.499647553 +0000 UTC m=+24.780282760" lastFinishedPulling="2025-09-13 00:11:13.075329585 +0000 UTC m=+44.355964825" observedRunningTime="2025-09-13 00:11:14.295325378 +0000 UTC m=+45.575960593" watchObservedRunningTime="2025-09-13 00:11:15.302521517 +0000 UTC m=+46.583156744" Sep 13 00:11:15.509520 kubelet[2684]: I0913 00:11:15.509457 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76ca7db6-5a90-4717-a52a-ed703eb1a5a7-whisker-backend-key-pair\") pod \"whisker-cbf85fcc5-wtk7f\" (UID: \"76ca7db6-5a90-4717-a52a-ed703eb1a5a7\") " pod="calico-system/whisker-cbf85fcc5-wtk7f" Sep 13 00:11:15.512691 kubelet[2684]: I0913 00:11:15.512303 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkmwd\" (UniqueName: \"kubernetes.io/projected/76ca7db6-5a90-4717-a52a-ed703eb1a5a7-kube-api-access-hkmwd\") pod \"whisker-cbf85fcc5-wtk7f\" (UID: \"76ca7db6-5a90-4717-a52a-ed703eb1a5a7\") " pod="calico-system/whisker-cbf85fcc5-wtk7f" Sep 13 00:11:15.512691 kubelet[2684]: I0913 00:11:15.512376 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ca7db6-5a90-4717-a52a-ed703eb1a5a7-whisker-ca-bundle\") pod \"whisker-cbf85fcc5-wtk7f\" (UID: \"76ca7db6-5a90-4717-a52a-ed703eb1a5a7\") " pod="calico-system/whisker-cbf85fcc5-wtk7f" Sep 13 00:11:15.552846 systemd-resolved[1473]: Under memory pressure, flushing caches. Sep 13 00:11:15.552858 systemd-resolved[1473]: Flushed all caches. Sep 13 00:11:15.557072 systemd-journald[1140]: Under memory pressure, flushing caches. 
Sep 13 00:11:15.687134 containerd[1590]: time="2025-09-13T00:11:15.686994582Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbf85fcc5-wtk7f,Uid:76ca7db6-5a90-4717-a52a-ed703eb1a5a7,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:15.930138 containerd[1590]: time="2025-09-13T00:11:15.930013604Z" level=info msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" Sep 13 00:11:15.934787 containerd[1590]: time="2025-09-13T00:11:15.933738644Z" level=info msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" Sep 13 00:11:16.323752 kernel: bpftool[4123]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.283 [INFO][4071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.286 [INFO][4071] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" iface="eth0" netns="/var/run/netns/cni-a5871bf9-1aa7-f5f0-2e14-5143372a51b6" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.288 [INFO][4071] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" iface="eth0" netns="/var/run/netns/cni-a5871bf9-1aa7-f5f0-2e14-5143372a51b6" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.308 [INFO][4071] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" iface="eth0" netns="/var/run/netns/cni-a5871bf9-1aa7-f5f0-2e14-5143372a51b6" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.323 [INFO][4071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.323 [INFO][4071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.554 [INFO][4126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.554 [INFO][4126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.554 [INFO][4126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.568 [WARNING][4126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.568 [INFO][4126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.571 [INFO][4126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:16.596298 containerd[1590]: 2025-09-13 00:11:16.582 [INFO][4071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:16.601158 containerd[1590]: time="2025-09-13T00:11:16.600837573Z" level=info msg="TearDown network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" successfully" Sep 13 00:11:16.601158 containerd[1590]: time="2025-09-13T00:11:16.600898543Z" level=info msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" returns successfully" Sep 13 00:11:16.606990 containerd[1590]: time="2025-09-13T00:11:16.605187436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-gcpv4,Uid:e4db298b-8af8-4bf0-9f46-9f2489a8fa88,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:16.613854 systemd[1]: run-netns-cni\x2da5871bf9\x2d1aa7\x2df5f0\x2d2e14\x2d5143372a51b6.mount: Deactivated successfully. Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.461 [INFO][4087] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.461 [INFO][4087] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" iface="eth0" netns="/var/run/netns/cni-f44d9166-a28e-ac31-eb26-e3c97477653b" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.462 [INFO][4087] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" iface="eth0" netns="/var/run/netns/cni-f44d9166-a28e-ac31-eb26-e3c97477653b" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.463 [INFO][4087] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" iface="eth0" netns="/var/run/netns/cni-f44d9166-a28e-ac31-eb26-e3c97477653b" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.464 [INFO][4087] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.464 [INFO][4087] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.558 [INFO][4144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.570 [INFO][4144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.574 [INFO][4144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.605 [WARNING][4144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.605 [INFO][4144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.612 [INFO][4144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:16.632791 containerd[1590]: 2025-09-13 00:11:16.622 [INFO][4087] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:16.632791 containerd[1590]: time="2025-09-13T00:11:16.632583618Z" level=info msg="TearDown network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" successfully" Sep 13 00:11:16.632791 containerd[1590]: time="2025-09-13T00:11:16.632624657Z" level=info msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" returns successfully" Sep 13 00:11:16.638300 containerd[1590]: time="2025-09-13T00:11:16.638130708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsg52,Uid:04b956de-e0b5-4148-a086-60a45e92f38a,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:16.651687 systemd[1]: run-netns-cni\x2df44d9166\x2da28e\x2dac31\x2deb26\x2de3c97477653b.mount: Deactivated successfully. 
Sep 13 00:11:16.933986 containerd[1590]: time="2025-09-13T00:11:16.932086598Z" level=info msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\"" Sep 13 00:11:16.965755 kubelet[2684]: I0913 00:11:16.965352 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37599b9e-55ad-4d52-b2bf-48d68b18c66b" path="/var/lib/kubelet/pods/37599b9e-55ad-4d52-b2bf-48d68b18c66b/volumes" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.718 [INFO][4171] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.719 [INFO][4171] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" iface="eth0" netns="/var/run/netns/cni-cfcd7548-46d2-42b0-2826-742258193d82" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.720 [INFO][4171] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" iface="eth0" netns="/var/run/netns/cni-cfcd7548-46d2-42b0-2826-742258193d82" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.722 [INFO][4171] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" iface="eth0" netns="/var/run/netns/cni-cfcd7548-46d2-42b0-2826-742258193d82" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.723 [INFO][4171] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:16.723 [INFO][4171] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.031 [INFO][4197] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" HandleID="k8s-pod-network.6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.033 [INFO][4197] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.034 [INFO][4197] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.050 [WARNING][4197] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" HandleID="k8s-pod-network.6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.050 [INFO][4197] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" HandleID="k8s-pod-network.6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.053 [INFO][4197] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:17.081836 containerd[1590]: 2025-09-13 00:11:17.072 [INFO][4171] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be" Sep 13 00:11:17.094899 containerd[1590]: time="2025-09-13T00:11:17.094275861Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbf85fcc5-wtk7f,Uid:76ca7db6-5a90-4717-a52a-ed703eb1a5a7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" Sep 13 00:11:17.116210 kubelet[2684]: E0913 00:11:17.116127 2684 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" Sep 13 00:11:17.119155 kubelet[2684]: E0913 00:11:17.119071 2684 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" pod="calico-system/whisker-cbf85fcc5-wtk7f" Sep 13 00:11:17.120179 kubelet[2684]: E0913 00:11:17.119174 2684 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" pod="calico-system/whisker-cbf85fcc5-wtk7f" Sep 13 00:11:17.134425 kubelet[2684]: E0913 00:11:17.134257 2684 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-cbf85fcc5-wtk7f_calico-system(76ca7db6-5a90-4717-a52a-ed703eb1a5a7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-cbf85fcc5-wtk7f_calico-system(76ca7db6-5a90-4717-a52a-ed703eb1a5a7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be\\\": plugin type=\\\"calico\\\" failed (add): failed to look up reserved IPs: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"" pod="calico-system/whisker-cbf85fcc5-wtk7f" podUID="76ca7db6-5a90-4717-a52a-ed703eb1a5a7" Sep 13 00:11:17.282416 systemd-networkd[1221]: vxlan.calico: Link UP Sep 13 00:11:17.282432 systemd-networkd[1221]: vxlan.calico: Gained carrier Sep 13 00:11:17.363374 containerd[1590]: time="2025-09-13T00:11:17.362467195Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-cbf85fcc5-wtk7f,Uid:76ca7db6-5a90-4717-a52a-ed703eb1a5a7,Namespace:calico-system,Attempt:0,}" Sep 13 00:11:17.373482 systemd-networkd[1221]: calie58d815b1e0: Link UP Sep 13 00:11:17.377437 systemd-networkd[1221]: calie58d815b1e0: Gained carrier Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:16.907 [INFO][4181] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0 calico-apiserver-75744756bc- calico-apiserver e4db298b-8af8-4bf0-9f46-9f2489a8fa88 919 0 2025-09-13 00:10:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75744756bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 calico-apiserver-75744756bc-gcpv4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie58d815b1e0 [] [] }} ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:16.907 [INFO][4181] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.189 [INFO][4216] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" HandleID="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.190 [INFO][4216] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" HandleID="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123a60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-738365eea6", "pod":"calico-apiserver-75744756bc-gcpv4", "timestamp":"2025-09-13 00:11:17.189738285 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.190 [INFO][4216] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.190 [INFO][4216] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.190 [INFO][4216] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.215 [INFO][4216] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.228 [INFO][4216] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.242 [INFO][4216] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.251 [INFO][4216] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.257 [INFO][4216] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.257 [INFO][4216] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.263 [INFO][4216] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692 Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.282 [INFO][4216] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.301 [INFO][4216] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.1/26] block=192.168.109.0/26 handle="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.301 [INFO][4216] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.1/26] handle="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.301 [INFO][4216] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:17.425912 containerd[1590]: 2025-09-13 00:11:17.301 [INFO][4216] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.1/26] IPv6=[] ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" HandleID="k8s-pod-network.1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.326 [INFO][4181] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4db298b-8af8-4bf0-9f46-9f2489a8fa88", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"calico-apiserver-75744756bc-gcpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie58d815b1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.328 [INFO][4181] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.1/32] ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.329 [INFO][4181] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie58d815b1e0 ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.378 [INFO][4181] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.385 
[INFO][4181] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4db298b-8af8-4bf0-9f46-9f2489a8fa88", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692", Pod:"calico-apiserver-75744756bc-gcpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie58d815b1e0", MAC:"c6:d1:33:e2:a1:b0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.427026 containerd[1590]: 2025-09-13 00:11:17.409 [INFO][4181] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-gcpv4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:17.453486 systemd[1]: run-netns-cni\x2dcfcd7548\x2d46d2\x2d42b0\x2d2826\x2d742258193d82.mount: Deactivated successfully. Sep 13 00:11:17.453688 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ced17aab6196dd69a69980d125c3454a077c3f20b969c1f538ea990981f58be-shm.mount: Deactivated successfully. 
Sep 13 00:11:17.511590 systemd-networkd[1221]: calic2b4b23d530: Link UP Sep 13 00:11:17.514046 systemd-networkd[1221]: calic2b4b23d530: Gained carrier Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.036 [INFO][4194] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0 csi-node-driver- calico-system 04b956de-e0b5-4148-a086-60a45e92f38a 921 0 2025-09-13 00:10:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 csi-node-driver-qsg52 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic2b4b23d530 [] [] }} ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.046 [INFO][4194] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.197 [INFO][4232] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" HandleID="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.198 [INFO][4232] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" HandleID="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"csi-node-driver-qsg52", "timestamp":"2025-09-13 00:11:17.197817365 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.198 [INFO][4232] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.301 [INFO][4232] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.302 [INFO][4232] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.333 [INFO][4232] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.353 [INFO][4232] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.380 [INFO][4232] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.386 [INFO][4232] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.399 [INFO][4232] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.399 [INFO][4232] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.410 [INFO][4232] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93 Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.422 [INFO][4232] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.437 [INFO][4232] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.2/26] block=192.168.109.0/26 handle="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.437 [INFO][4232] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.2/26] handle="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.437 [INFO][4232] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:17.544476 containerd[1590]: 2025-09-13 00:11:17.437 [INFO][4232] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.2/26] IPv6=[] ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" HandleID="k8s-pod-network.7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.481 [INFO][4194] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04b956de-e0b5-4148-a086-60a45e92f38a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"csi-node-driver-qsg52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2b4b23d530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.489 [INFO][4194] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.2/32] ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.490 [INFO][4194] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic2b4b23d530 ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.512 [INFO][4194] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.512 [INFO][4194] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04b956de-e0b5-4148-a086-60a45e92f38a", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93", Pod:"csi-node-driver-qsg52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2b4b23d530", MAC:"da:6d:3c:dc:fa:78", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:17.547552 containerd[1590]: 2025-09-13 00:11:17.540 [INFO][4194] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93" Namespace="calico-system" Pod="csi-node-driver-qsg52" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.236 [INFO][4225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.237 [INFO][4225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" iface="eth0" netns="/var/run/netns/cni-543a2c34-e096-f9f6-3359-7411af3b11eb" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.237 [INFO][4225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" iface="eth0" netns="/var/run/netns/cni-543a2c34-e096-f9f6-3359-7411af3b11eb" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.237 [INFO][4225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" iface="eth0" netns="/var/run/netns/cni-543a2c34-e096-f9f6-3359-7411af3b11eb" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.237 [INFO][4225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.237 [INFO][4225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.493 [INFO][4257] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.494 [INFO][4257] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.494 [INFO][4257] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.516 [WARNING][4257] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.521 [INFO][4257] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.531 [INFO][4257] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:17.558650 containerd[1590]: 2025-09-13 00:11:17.544 [INFO][4225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Sep 13 00:11:17.562250 containerd[1590]: time="2025-09-13T00:11:17.561882279Z" level=info msg="TearDown network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" successfully" Sep 13 00:11:17.563477 containerd[1590]: time="2025-09-13T00:11:17.563021557Z" level=info msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" returns successfully" Sep 13 00:11:17.566566 systemd[1]: run-netns-cni\x2d543a2c34\x2de096\x2df9f6\x2d3359\x2d7411af3b11eb.mount: Deactivated successfully. Sep 13 00:11:17.572130 containerd[1590]: time="2025-09-13T00:11:17.572076517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4785d957-fpstv,Uid:a98a040e-2ea2-4095-a76f-86a4e06c7abd,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:17.735171 containerd[1590]: time="2025-09-13T00:11:17.735013726Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.737542621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.737627361Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.737667936Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.737871900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.735108169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.735742410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.738874 containerd[1590]: time="2025-09-13T00:11:17.736438891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:17.929754 containerd[1590]: time="2025-09-13T00:11:17.929235949Z" level=info msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" Sep 13 00:11:17.934479 containerd[1590]: time="2025-09-13T00:11:17.931078906Z" level=info msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" Sep 13 00:11:18.145299 containerd[1590]: time="2025-09-13T00:11:18.145210385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qsg52,Uid:04b956de-e0b5-4148-a086-60a45e92f38a,Namespace:calico-system,Attempt:1,} returns sandbox id \"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93\"" Sep 13 00:11:18.166333 containerd[1590]: time="2025-09-13T00:11:18.166180422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 13 00:11:18.270334 containerd[1590]: time="2025-09-13T00:11:18.269759241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-gcpv4,Uid:e4db298b-8af8-4bf0-9f46-9f2489a8fa88,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692\"" Sep 13 00:11:18.309372 systemd-networkd[1221]: calib8f0ddd9204: Link UP Sep 13 00:11:18.309636 systemd-networkd[1221]: calib8f0ddd9204: Gained carrier Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:17.783 [INFO][4324] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0 calico-kube-controllers-5c4785d957- calico-system a98a040e-2ea2-4095-a76f-86a4e06c7abd 928 0 2025-09-13 00:10:53 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c4785d957 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 
calico-kube-controllers-5c4785d957-fpstv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib8f0ddd9204 [] [] }} ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:17.783 [INFO][4324] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.044 [INFO][4371] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" HandleID="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.044 [INFO][4371] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" HandleID="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000326860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"calico-kube-controllers-5c4785d957-fpstv", "timestamp":"2025-09-13 00:11:18.044473947 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.044 [INFO][4371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.044 [INFO][4371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.045 [INFO][4371] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.127 [INFO][4371] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.147 [INFO][4371] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.181 [INFO][4371] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.188 [INFO][4371] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.217 [INFO][4371] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.217 [INFO][4371] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.222 [INFO][4371] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.236 [INFO][4371] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.255 [INFO][4371] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.3/26] block=192.168.109.0/26 handle="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.257 [INFO][4371] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.3/26] handle="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.257 [INFO][4371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:18.361070 containerd[1590]: 2025-09-13 00:11:18.257 [INFO][4371] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.3/26] IPv6=[] ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" HandleID="k8s-pod-network.a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.279 [INFO][4324] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0", GenerateName:"calico-kube-controllers-5c4785d957-", Namespace:"calico-system", SelfLink:"", UID:"a98a040e-2ea2-4095-a76f-86a4e06c7abd", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4785d957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"calico-kube-controllers-5c4785d957-fpstv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f0ddd9204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.285 [INFO][4324] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.3/32] ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.286 [INFO][4324] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8f0ddd9204 ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.308 [INFO][4324] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" 
WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.311 [INFO][4324] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0", GenerateName:"calico-kube-controllers-5c4785d957-", Namespace:"calico-system", SelfLink:"", UID:"a98a040e-2ea2-4095-a76f-86a4e06c7abd", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4785d957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a", Pod:"calico-kube-controllers-5c4785d957-fpstv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f0ddd9204", MAC:"72:6a:4d:3b:34:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.364352 containerd[1590]: 2025-09-13 00:11:18.346 [INFO][4324] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a" Namespace="calico-system" Pod="calico-kube-controllers-5c4785d957-fpstv" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0" Sep 13 00:11:18.561485 systemd-networkd[1221]: calie58d815b1e0: Gained IPv6LL Sep 13 00:11:18.586858 systemd-networkd[1221]: calib929bb3ac3c: Link UP Sep 13 00:11:18.606768 containerd[1590]: time="2025-09-13T00:11:18.606016032Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:18.612040 containerd[1590]: time="2025-09-13T00:11:18.611841986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:18.612040 containerd[1590]: time="2025-09-13T00:11:18.611963816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.616944 containerd[1590]: time="2025-09-13T00:11:18.616791498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.621611 systemd-networkd[1221]: calib929bb3ac3c: Gained carrier Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.315 [INFO][4422] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.318 [INFO][4422] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" iface="eth0" netns="/var/run/netns/cni-e9b3c2cd-38c1-b064-f838-b8916854e275" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.330 [INFO][4422] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" iface="eth0" netns="/var/run/netns/cni-e9b3c2cd-38c1-b064-f838-b8916854e275" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.331 [INFO][4422] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" iface="eth0" netns="/var/run/netns/cni-e9b3c2cd-38c1-b064-f838-b8916854e275" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.331 [INFO][4422] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.331 [INFO][4422] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.621 [INFO][4467] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.622 [INFO][4467] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.622 [INFO][4467] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.655 [WARNING][4467] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.655 [INFO][4467] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.667 [INFO][4467] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:18.692834 containerd[1590]: 2025-09-13 00:11:18.685 [INFO][4422] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:17.831 [INFO][4293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0 whisker-cbf85fcc5- calico-system 76ca7db6-5a90-4717-a52a-ed703eb1a5a7 922 0 2025-09-13 00:11:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:cbf85fcc5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 whisker-cbf85fcc5-wtk7f eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib929bb3ac3c [] [] }} ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:17.832 [INFO][4293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.291 [INFO][4397] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" HandleID="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.291 [INFO][4397] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" HandleID="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e460), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"whisker-cbf85fcc5-wtk7f", "timestamp":"2025-09-13 00:11:18.291625904 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.292 [INFO][4397] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.292 [INFO][4397] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.292 [INFO][4397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.352 [INFO][4397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.374 [INFO][4397] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.397 [INFO][4397] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.407 [INFO][4397] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.417 [INFO][4397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.417 [INFO][4397] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.424 [INFO][4397] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.436 [INFO][4397] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.465 [INFO][4397] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.4/26] block=192.168.109.0/26 handle="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.465 [INFO][4397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.4/26] handle="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.465 [INFO][4397] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
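Each entry carries the journald timestamp and unit ("containerd[1590]"), then Calico's own timestamp, level, request ID, source file and line, and key=value fields. That regularity makes the log easy to mine; the regexp below is inferred from the lines above, not from any documented format, and pulls the assigned address and sandbox ID out of an "IPAM assigned addresses" entry:

package main

import (
	"fmt"
	"regexp"
)

// assignRe matches the "Calico CNI IPAM assigned addresses" entries
// seen above, capturing the IPv4 list and the sandbox ContainerID.
var assignRe = regexp.MustCompile(
	`IPAM assigned addresses IPv4=\[([0-9./]+)\] IPv6=\[[^\]]*\] ContainerID="([0-9a-f]+)"`)

func main() {
	line := `ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses ` +
		`IPv4=[192.168.109.4/26] IPv6=[] ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d"`
	if m := assignRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("sandbox %s... got %s\n", m[2][:12], m[1])
	}
}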
Sep 13 00:11:18.696828 containerd[1590]: 2025-09-13 00:11:18.465 [INFO][4397] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.4/26] IPv6=[] ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" HandleID="k8s-pod-network.eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.509 [INFO][4293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0", GenerateName:"whisker-cbf85fcc5-", Namespace:"calico-system", SelfLink:"", UID:"76ca7db6-5a90-4717-a52a-ed703eb1a5a7", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbf85fcc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"whisker-cbf85fcc5-wtk7f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib929bb3ac3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.511 [INFO][4293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.4/32] ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.512 [INFO][4293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib929bb3ac3c ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.619 [INFO][4293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.643 [INFO][4293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" 
Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0", GenerateName:"whisker-cbf85fcc5-", Namespace:"calico-system", SelfLink:"", UID:"76ca7db6-5a90-4717-a52a-ed703eb1a5a7", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"cbf85fcc5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d", Pod:"whisker-cbf85fcc5-wtk7f", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.109.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib929bb3ac3c", MAC:"ea:a6:c4:02:d2:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:18.701186 containerd[1590]: 2025-09-13 00:11:18.681 [INFO][4293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d" Namespace="calico-system" Pod="whisker-cbf85fcc5-wtk7f" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--cbf85fcc5--wtk7f-eth0" Sep 13 00:11:18.706464 containerd[1590]: time="2025-09-13T00:11:18.697687527Z" level=info msg="TearDown network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" successfully" Sep 13 00:11:18.706464 containerd[1590]: time="2025-09-13T00:11:18.706129741Z" level=info msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" returns successfully" Sep 13 00:11:18.707769 systemd[1]: run-netns-cni\x2de9b3c2cd\x2d38c1\x2db064\x2df838\x2db8916854e275.mount: Deactivated successfully. Sep 13 00:11:18.718994 containerd[1590]: time="2025-09-13T00:11:18.718286338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-dm52h,Uid:0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d,Namespace:calico-apiserver,Attempt:1,}" Sep 13 00:11:18.825460 systemd[1]: run-containerd-runc-k8s.io-a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a-runc.yU3WbK.mount: Deactivated successfully. Sep 13 00:11:18.881818 containerd[1590]: time="2025-09-13T00:11:18.880109644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:18.881818 containerd[1590]: time="2025-09-13T00:11:18.880207920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:18.881818 containerd[1590]: time="2025-09-13T00:11:18.880226407Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.881818 containerd[1590]: time="2025-09-13T00:11:18.880388288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:18.986649 containerd[1590]: time="2025-09-13T00:11:18.979049566Z" level=info msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.503 [INFO][4429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.505 [INFO][4429] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" iface="eth0" netns="/var/run/netns/cni-40f83694-c38b-9da9-1113-03fc972fbfec" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.506 [INFO][4429] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" iface="eth0" netns="/var/run/netns/cni-40f83694-c38b-9da9-1113-03fc972fbfec" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.507 [INFO][4429] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" iface="eth0" netns="/var/run/netns/cni-40f83694-c38b-9da9-1113-03fc972fbfec" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.508 [INFO][4429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:18.508 [INFO][4429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.017 [INFO][4487] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.020 [INFO][4487] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.021 [INFO][4487] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.043 [WARNING][4487] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.043 [INFO][4487] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.046 [INFO][4487] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:19.074144 containerd[1590]: 2025-09-13 00:11:19.052 [INFO][4429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:19.082216 containerd[1590]: time="2025-09-13T00:11:19.081458432Z" level=info msg="TearDown network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" successfully" Sep 13 00:11:19.083860 containerd[1590]: time="2025-09-13T00:11:19.082666732Z" level=info msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" returns successfully" Sep 13 00:11:19.094958 kubelet[2684]: E0913 00:11:19.094191 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:19.109318 containerd[1590]: time="2025-09-13T00:11:19.108678226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzpdb,Uid:7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5,Namespace:kube-system,Attempt:1,}" Sep 13 00:11:19.142561 systemd-networkd[1221]: vxlan.calico: Gained IPv6LL Sep 13 00:11:19.203802 systemd-networkd[1221]: calic2b4b23d530: Gained IPv6LL Sep 13 00:11:19.215619 containerd[1590]: time="2025-09-13T00:11:19.215376443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c4785d957-fpstv,Uid:a98a040e-2ea2-4095-a76f-86a4e06c7abd,Namespace:calico-system,Attempt:1,} returns sandbox id \"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a\"" Sep 13 00:11:19.394296 containerd[1590]: time="2025-09-13T00:11:19.394025175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-cbf85fcc5-wtk7f,Uid:76ca7db6-5a90-4717-a52a-ed703eb1a5a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d\"" Sep 13 00:11:19.470758 systemd[1]: run-netns-cni\x2d40f83694\x2dc38b\x2d9da9\x2d1113\x2d03fc972fbfec.mount: Deactivated successfully. Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.253 [INFO][4603] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.253 [INFO][4603] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" iface="eth0" netns="/var/run/netns/cni-d809ac7a-b34d-0bc2-c749-99a62fbfa39f" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.254 [INFO][4603] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" iface="eth0" netns="/var/run/netns/cni-d809ac7a-b34d-0bc2-c749-99a62fbfa39f" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.254 [INFO][4603] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" iface="eth0" netns="/var/run/netns/cni-d809ac7a-b34d-0bc2-c749-99a62fbfa39f" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.254 [INFO][4603] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.254 [INFO][4603] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.419 [INFO][4628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.420 [INFO][4628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.420 [INFO][4628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.442 [WARNING][4628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.443 [INFO][4628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.456 [INFO][4628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:19.530024 containerd[1590]: 2025-09-13 00:11:19.496 [INFO][4603] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:19.539768 containerd[1590]: time="2025-09-13T00:11:19.535437713Z" level=info msg="TearDown network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" successfully" Sep 13 00:11:19.539768 containerd[1590]: time="2025-09-13T00:11:19.535498194Z" level=info msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" returns successfully" Sep 13 00:11:19.539768 containerd[1590]: time="2025-09-13T00:11:19.537337429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nslpp,Uid:a2c2404d-4798-433c-8a7a-69c289c0678c,Namespace:calico-system,Attempt:1,}" Sep 13 00:11:19.539491 systemd[1]: run-netns-cni\x2dd809ac7a\x2db34d\x2d0bc2\x2dc749\x2d99a62fbfa39f.mount: Deactivated successfully. 
Sep 13 00:11:19.713320 systemd-networkd[1221]: calib8f0ddd9204: Gained IPv6LL Sep 13 00:11:19.782456 systemd-networkd[1221]: cali5fcb903c89b: Link UP Sep 13 00:11:19.782957 systemd-networkd[1221]: cali5fcb903c89b: Gained carrier Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.284 [INFO][4571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0 calico-apiserver-75744756bc- calico-apiserver 0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d 943 0 2025-09-13 00:10:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75744756bc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 calico-apiserver-75744756bc-dm52h eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5fcb903c89b [] [] }} ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.286 [INFO][4571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.544 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" HandleID="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.546 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" HandleID="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037b460), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.5-n-738365eea6", "pod":"calico-apiserver-75744756bc-dm52h", "timestamp":"2025-09-13 00:11:19.544037198 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.546 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.548 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
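"Gained IPv6LL" means the interface picked up its fe80::/64 link-local address. Assuming the interface uses modified EUI-64 (RFC 4291) rather than a stable-privacy token, the address follows mechanically from the MAC recorded in the endpoint dumps above:

package main

import (
	"fmt"
	"net"
)

// eui64LinkLocal derives the fe80::/64 address systemd-networkd reports
// as "Gained IPv6LL": flip the universal/local bit of the first MAC
// octet and splice ff:fe into the middle.
func eui64LinkLocal(mac net.HardwareAddr) net.IP {
	ip := net.ParseIP("fe80::")
	ip[8] = mac[0] ^ 0x02 // flip the U/L bit
	ip[9] = mac[1]
	ip[10], ip[11] = mac[2], 0xff
	ip[12], ip[13] = 0xfe, mac[3]
	ip[14], ip[15] = mac[4], mac[5]
	return ip
}

func main() {
	// MAC of calib929bb3ac3c, from the whisker endpoint dump above.
	mac, _ := net.ParseMAC("ea:a6:c4:02:d2:03")
	fmt.Println(eui64LinkLocal(mac)) // fe80::e8a6:c4ff:fe02:d203
}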
Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.548 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.570 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.641 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.662 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.672 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.690 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.695 [INFO][4635] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.702 [INFO][4635] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0 Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.713 [INFO][4635] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.729 [INFO][4635] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.5/26] block=192.168.109.0/26 handle="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.729 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.5/26] handle="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.729 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
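The apiserver assignment retraces the same affinity dance: 192.168.109.0/26 is affine to this host, so every candidate address must fall inside that 64-address window. The containment check is one call with the standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// "Trying affinity for 192.168.109.0/26": before assigning, the host
	// checks that its affine block actually contains the candidate.
	_, affine, _ := net.ParseCIDR("192.168.109.0/26")
	for _, s := range []string{"192.168.109.5", "192.168.109.70"} {
		fmt.Printf("%s in %s: %v\n", s, affine, affine.Contains(net.ParseIP(s)))
	}
	// A /26 spans 64 addresses, so 192.168.109.70 lands in the next block.
}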
Sep 13 00:11:19.840241 containerd[1590]: 2025-09-13 00:11:19.729 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.5/26] IPv6=[] ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" HandleID="k8s-pod-network.5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.739 [INFO][4571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"calico-apiserver-75744756bc-dm52h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fcb903c89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.740 [INFO][4571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.5/32] ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.740 [INFO][4571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fcb903c89b ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.795 [INFO][4571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.799 
[INFO][4571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0", Pod:"calico-apiserver-75744756bc-dm52h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fcb903c89b", MAC:"0e:fa:f0:95:d1:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:19.841692 containerd[1590]: 2025-09-13 00:11:19.823 [INFO][4571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0" Namespace="calico-apiserver" Pod="calico-apiserver-75744756bc-dm52h" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:19.897672 systemd-networkd[1221]: cali1d88ce599fc: Link UP Sep 13 00:11:19.898944 systemd-networkd[1221]: cali1d88ce599fc: Gained carrier Sep 13 00:11:19.905311 systemd-networkd[1221]: calib929bb3ac3c: Gained IPv6LL Sep 13 00:11:19.933663 containerd[1590]: time="2025-09-13T00:11:19.933112554Z" level=info msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" Sep 13 00:11:19.946883 containerd[1590]: time="2025-09-13T00:11:19.944864048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:19.946883 containerd[1590]: time="2025-09-13T00:11:19.945044258Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:19.946883 containerd[1590]: time="2025-09-13T00:11:19.945101279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:19.946883 containerd[1590]: time="2025-09-13T00:11:19.945369309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.517 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0 coredns-7c65d6cfc9- kube-system 7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5 948 0 2025-09-13 00:10:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 coredns-7c65d6cfc9-vzpdb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1d88ce599fc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.518 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.700 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" HandleID="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.700 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" HandleID="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004a5ac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"coredns-7c65d6cfc9-vzpdb", "timestamp":"2025-09-13 00:11:19.700184438 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.700 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.735 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
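The coredns endpoint above is the first with named ports ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}); the full endpoint dumps that follow print the same ports through Go's verbose formatter as Port:0x35 and Port:0x23c1. Decoded, they are just DNS and the CoreDNS metrics port:

package main

import "fmt"

// The hex port numbers in the WorkloadEndpointPort dumps are ordinary
// decimal ports rendered by %#v-style formatting; nothing exotic.
func main() {
	ports := map[string]uint16{"dns": 0x35, "dns-tcp": 0x35, "metrics": 0x23c1}
	for name, p := range ports {
		fmt.Printf("%-8s %d\n", name, p) // dns 53, dns-tcp 53, metrics 9153
	}
}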
Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.735 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.758 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.796 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.814 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.831 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.839 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.840 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.845 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.856 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.873 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.6/26] block=192.168.109.0/26 handle="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.873 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.6/26] handle="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.874 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
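Claims like "Attempting to assign 1 addresses from block" index addresses by ordinal, the offset from the block base: 192.168.109.6, just claimed for coredns, is ordinal 6. Since a /26 never crosses an octet boundary, the conversion stays in the last byte:

package main

import (
	"fmt"
	"net"
)

// ordinal returns an address's offset from its block base, the index
// used by the block's allocation table. Valid for blocks of /24 or
// smaller, which sit inside one octet.
func ordinal(block *net.IPNet, ip net.IP) int {
	return int(ip.To4()[3] - block.IP.To4()[3])
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.109.0/26")
	fmt.Println(ordinal(block, net.ParseIP("192.168.109.6"))) // 6
}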
Sep 13 00:11:19.951275 containerd[1590]: 2025-09-13 00:11:19.874 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.6/26] IPv6=[] ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" HandleID="k8s-pod-network.9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.880 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"coredns-7c65d6cfc9-vzpdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d88ce599fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.880 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.6/32] ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.880 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1d88ce599fc ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.899 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.901 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab", Pod:"coredns-7c65d6cfc9-vzpdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d88ce599fc", MAC:"86:cc:59:8a:90:1b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:19.953281 containerd[1590]: 2025-09-13 00:11:19.939 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab" Namespace="kube-system" Pod="coredns-7c65d6cfc9-vzpdb" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:20.069754 containerd[1590]: time="2025-09-13T00:11:20.068754854Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:20.071008 containerd[1590]: time="2025-09-13T00:11:20.069639074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:20.071008 containerd[1590]: time="2025-09-13T00:11:20.070054030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:20.074748 containerd[1590]: time="2025-09-13T00:11:20.071029393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:20.141172 systemd-networkd[1221]: calid324187a2a0: Link UP Sep 13 00:11:20.148347 systemd-networkd[1221]: calid324187a2a0: Gained carrier Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.792 [INFO][4660] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0 goldmane-7988f88666- calico-system a2c2404d-4798-433c-8a7a-69c289c0678c 954 0 2025-09-13 00:10:52 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.5-n-738365eea6 goldmane-7988f88666-nslpp eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid324187a2a0 [] [] }} ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.795 [INFO][4660] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.923 [INFO][4685] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" HandleID="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.925 [INFO][4685] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" HandleID="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122720), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"goldmane-7988f88666-nslpp", "timestamp":"2025-09-13 00:11:19.923058262 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.926 [INFO][4685] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.926 [INFO][4685] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
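calid324187a2a0 going Link UP follows the same naming scheme as cali5fcb903c89b and cali1d88ce599fc above: a fixed prefix plus a truncated hash of the workload's identity, unique per endpoint yet inside Linux's 15-byte interface-name limit (IFNAMSIZ). The exact hash and inputs are Calico implementation details; sha256 below is only a stand-in to show the shape:

package main

import (
	"crypto/sha256"
	"fmt"
)

// vethName sketches hash-based host-side interface naming. The real
// plugin's hash function and inputs differ; what matters is the fixed
// prefix plus a hash truncated to fit IFNAMSIZ.
func vethName(namespace, pod string) string {
	sum := sha256.Sum256([]byte(namespace + "." + pod))
	return fmt.Sprintf("cali%x", sum[:])[:15]
}

func main() {
	fmt.Println(vethName("calico-system", "goldmane-7988f88666-nslpp"))
	fmt.Println(len(vethName("kube-system", "coredns-7c65d6cfc9-vzpdb"))) // 15
}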
Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.927 [INFO][4685] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.954 [INFO][4685] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:19.987 [INFO][4685] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.007 [INFO][4685] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.015 [INFO][4685] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.049 [INFO][4685] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.050 [INFO][4685] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.063 [INFO][4685] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.084 [INFO][4685] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.105 [INFO][4685] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.7/26] block=192.168.109.0/26 handle="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.106 [INFO][4685] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.7/26] handle="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.106 [INFO][4685] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:20.197300 containerd[1590]: 2025-09-13 00:11:20.106 [INFO][4685] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.7/26] IPv6=[] ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" HandleID="k8s-pod-network.aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.132 [INFO][4660] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a2c2404d-4798-433c-8a7a-69c289c0678c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"goldmane-7988f88666-nslpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid324187a2a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.132 [INFO][4660] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.7/32] ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.132 [INFO][4660] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid324187a2a0 ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.156 [INFO][4660] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.163 [INFO][4660] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" 
Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a2c2404d-4798-433c-8a7a-69c289c0678c", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e", Pod:"goldmane-7988f88666-nslpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid324187a2a0", MAC:"5a:24:e6:90:7c:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:20.198550 containerd[1590]: 2025-09-13 00:11:20.193 [INFO][4660] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e" Namespace="calico-system" Pod="goldmane-7988f88666-nslpp" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:20.361858 containerd[1590]: time="2025-09-13T00:11:20.359457043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-vzpdb,Uid:7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5,Namespace:kube-system,Attempt:1,} returns sandbox id \"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab\"" Sep 13 00:11:20.368048 kubelet[2684]: E0913 00:11:20.365360 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:20.389630 containerd[1590]: time="2025-09-13T00:11:20.388837486Z" level=info msg="CreateContainer within sandbox \"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:20.429189 containerd[1590]: time="2025-09-13T00:11:20.418952969Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:20.429189 containerd[1590]: time="2025-09-13T00:11:20.419289787Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:20.429189 containerd[1590]: time="2025-09-13T00:11:20.419309975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:20.434168 containerd[1590]: time="2025-09-13T00:11:20.432861981Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:20.528938 systemd[1]: run-containerd-runc-k8s.io-aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e-runc.mLfSdE.mount: Deactivated successfully. Sep 13 00:11:20.549550 containerd[1590]: time="2025-09-13T00:11:20.548500601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75744756bc-dm52h,Uid:0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0\"" Sep 13 00:11:20.561814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3654157125.mount: Deactivated successfully. Sep 13 00:11:20.625846 containerd[1590]: time="2025-09-13T00:11:20.625645673Z" level=info msg="CreateContainer within sandbox \"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"acbe5ebcd76ef850b5b1472e4b174d4fd08676242a8be4d91102623e237ecce3\"" Sep 13 00:11:20.641158 containerd[1590]: time="2025-09-13T00:11:20.641026608Z" level=info msg="StartContainer for \"acbe5ebcd76ef850b5b1472e4b174d4fd08676242a8be4d91102623e237ecce3\"" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.322 [INFO][4735] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.323 [INFO][4735] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" iface="eth0" netns="/var/run/netns/cni-ee837d8e-4124-1b42-e5de-cf085141114b" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.327 [INFO][4735] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" iface="eth0" netns="/var/run/netns/cni-ee837d8e-4124-1b42-e5de-cf085141114b" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.328 [INFO][4735] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" iface="eth0" netns="/var/run/netns/cni-ee837d8e-4124-1b42-e5de-cf085141114b" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.328 [INFO][4735] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.328 [INFO][4735] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.635 [INFO][4815] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.636 [INFO][4815] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.636 [INFO][4815] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.651 [WARNING][4815] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.651 [INFO][4815] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.657 [INFO][4815] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:20.677310 containerd[1590]: 2025-09-13 00:11:20.661 [INFO][4735] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:20.682286 containerd[1590]: time="2025-09-13T00:11:20.681577291Z" level=info msg="TearDown network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" successfully" Sep 13 00:11:20.682286 containerd[1590]: time="2025-09-13T00:11:20.681741619Z" level=info msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" returns successfully" Sep 13 00:11:20.683197 kubelet[2684]: E0913 00:11:20.682891 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:20.685576 containerd[1590]: time="2025-09-13T00:11:20.685041248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5m4,Uid:ed56a258-9735-4b0f-b601-d65b5338149b,Namespace:kube-system,Attempt:1,}" Sep 13 00:11:20.885026 containerd[1590]: time="2025-09-13T00:11:20.884796303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-nslpp,Uid:a2c2404d-4798-433c-8a7a-69c289c0678c,Namespace:calico-system,Attempt:1,} returns sandbox id \"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e\"" Sep 13 00:11:20.930056 systemd-networkd[1221]: cali5fcb903c89b: Gained IPv6LL Sep 13 00:11:21.044288 containerd[1590]: time="2025-09-13T00:11:21.042975865Z" level=info msg="StartContainer for \"acbe5ebcd76ef850b5b1472e4b174d4fd08676242a8be4d91102623e237ecce3\" returns successfully" Sep 13 00:11:21.249464 systemd-networkd[1221]: cali1d88ce599fc: Gained IPv6LL Sep 13 00:11:21.305700 systemd-networkd[1221]: caliea38a8f3760: Link UP Sep 13 00:11:21.306224 systemd-networkd[1221]: caliea38a8f3760: Gained carrier Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.028 [INFO][4870] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0 coredns-7c65d6cfc9- kube-system ed56a258-9735-4b0f-b601-d65b5338149b 970 0 2025-09-13 00:10:34 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4081.3.5-n-738365eea6 coredns-7c65d6cfc9-bk5m4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliea38a8f3760 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.029 [INFO][4870] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.166 [INFO][4909] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" HandleID="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.168 [INFO][4909] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" HandleID="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038f990), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.5-n-738365eea6", "pod":"coredns-7c65d6cfc9-bk5m4", "timestamp":"2025-09-13 00:11:21.166258864 +0000 UTC"}, Hostname:"ci-4081.3.5-n-738365eea6", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.168 [INFO][4909] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.169 [INFO][4909] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.169 [INFO][4909] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.5-n-738365eea6' Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.191 [INFO][4909] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.204 [INFO][4909] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.217 [INFO][4909] ipam/ipam.go 511: Trying affinity for 192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.227 [INFO][4909] ipam/ipam.go 158: Attempting to load block cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.246 [INFO][4909] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.109.0/26 host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.248 [INFO][4909] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.109.0/26 handle="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.258 [INFO][4909] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5 Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.275 [INFO][4909] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.109.0/26 handle="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.289 [INFO][4909] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.109.8/26] block=192.168.109.0/26 handle="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.289 [INFO][4909] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.109.8/26] handle="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" host="ci-4081.3.5-n-738365eea6" Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.290 [INFO][4909] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 13 00:11:21.352162 containerd[1590]: 2025-09-13 00:11:21.290 [INFO][4909] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.109.8/26] IPv6=[] ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" HandleID="k8s-pod-network.f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.296 [INFO][4870] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed56a258-9735-4b0f-b601-d65b5338149b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"", Pod:"coredns-7c65d6cfc9-bk5m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea38a8f3760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.297 [INFO][4870] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.109.8/32] ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.297 [INFO][4870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliea38a8f3760 ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.311 [INFO][4870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.313 [INFO][4870] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed56a258-9735-4b0f-b601-d65b5338149b", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5", Pod:"coredns-7c65d6cfc9-bk5m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea38a8f3760", MAC:"5a:8d:72:0a:e3:b6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:21.356733 containerd[1590]: 2025-09-13 00:11:21.346 [INFO][4870] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5" Namespace="kube-system" Pod="coredns-7c65d6cfc9-bk5m4" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:21.377513 systemd-networkd[1221]: calid324187a2a0: Gained IPv6LL Sep 13 00:11:21.402658 containerd[1590]: time="2025-09-13T00:11:21.400126790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:21.403147 containerd[1590]: time="2025-09-13T00:11:21.402550884Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:11:21.403147 containerd[1590]: time="2025-09-13T00:11:21.402640370Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:11:21.403147 containerd[1590]: time="2025-09-13T00:11:21.402694791Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:21.403147 containerd[1590]: time="2025-09-13T00:11:21.402899415Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:11:21.411983 containerd[1590]: time="2025-09-13T00:11:21.411904111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 13 00:11:21.415310 containerd[1590]: time="2025-09-13T00:11:21.413462270Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:21.419567 containerd[1590]: time="2025-09-13T00:11:21.419509648Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 3.252940758s" Sep 13 00:11:21.420024 containerd[1590]: time="2025-09-13T00:11:21.419979123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 13 00:11:21.420221 containerd[1590]: time="2025-09-13T00:11:21.419814154Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:21.426926 containerd[1590]: time="2025-09-13T00:11:21.426660750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 13 00:11:21.432356 containerd[1590]: time="2025-09-13T00:11:21.431666985Z" level=info msg="CreateContainer within sandbox \"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 13 00:11:21.472388 systemd[1]: run-netns-cni\x2dee837d8e\x2d4124\x2d1b42\x2de5de\x2dcf085141114b.mount: Deactivated successfully. Sep 13 00:11:21.510528 systemd-journald[1140]: Under memory pressure, flushing caches. Sep 13 00:11:21.512211 systemd-resolved[1473]: Under memory pressure, flushing caches. Sep 13 00:11:21.512275 systemd-resolved[1473]: Flushed all caches. 
Sep 13 00:11:21.519480 containerd[1590]: time="2025-09-13T00:11:21.514971587Z" level=info msg="CreateContainer within sandbox \"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c2d5ee322e3718a58f5e8a1ba718605526ca1b11ac0172c16b79f5f57ec6e8cd\"" Sep 13 00:11:21.525733 containerd[1590]: time="2025-09-13T00:11:21.523180659Z" level=info msg="StartContainer for \"c2d5ee322e3718a58f5e8a1ba718605526ca1b11ac0172c16b79f5f57ec6e8cd\"" Sep 13 00:11:21.593564 containerd[1590]: time="2025-09-13T00:11:21.593472969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-bk5m4,Uid:ed56a258-9735-4b0f-b601-d65b5338149b,Namespace:kube-system,Attempt:1,} returns sandbox id \"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5\"" Sep 13 00:11:21.596135 kubelet[2684]: E0913 00:11:21.595860 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:21.607506 containerd[1590]: time="2025-09-13T00:11:21.607357818Z" level=info msg="CreateContainer within sandbox \"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 13 00:11:21.617063 kubelet[2684]: E0913 00:11:21.615808 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:21.642425 kubelet[2684]: I0913 00:11:21.642145 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-vzpdb" podStartSLOduration=47.642106844 podStartE2EDuration="47.642106844s" podCreationTimestamp="2025-09-13 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:21.634290382 +0000 UTC m=+52.914925600" watchObservedRunningTime="2025-09-13 00:11:21.642106844 +0000 UTC m=+52.922742073" Sep 13 00:11:21.696279 containerd[1590]: time="2025-09-13T00:11:21.696234663Z" level=info msg="CreateContainer within sandbox \"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0eb5119581f652490fe79b754b2d2b222c7acc393f84507fad2c474aee544544\"" Sep 13 00:11:21.700311 containerd[1590]: time="2025-09-13T00:11:21.700259474Z" level=info msg="StartContainer for \"c2d5ee322e3718a58f5e8a1ba718605526ca1b11ac0172c16b79f5f57ec6e8cd\" returns successfully" Sep 13 00:11:21.703559 containerd[1590]: time="2025-09-13T00:11:21.703387503Z" level=info msg="StartContainer for \"0eb5119581f652490fe79b754b2d2b222c7acc393f84507fad2c474aee544544\"" Sep 13 00:11:21.813444 containerd[1590]: time="2025-09-13T00:11:21.813174587Z" level=info msg="StartContainer for \"0eb5119581f652490fe79b754b2d2b222c7acc393f84507fad2c474aee544544\" returns successfully" Sep 13 00:11:22.449514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3014487142.mount: Deactivated successfully. Sep 13 00:11:22.507191 systemd[1]: Started sshd@7-164.90.159.5:22-139.178.68.195:36672.service - OpenSSH per-connection server daemon (139.178.68.195:36672). 
Sep 13 00:11:22.615762 sshd[5049]: Accepted publickey for core from 139.178.68.195 port 36672 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:11:22.617581 sshd[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:22.637351 systemd-logind[1559]: New session 8 of user core. Sep 13 00:11:22.642249 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 13 00:11:22.665767 kubelet[2684]: E0913 00:11:22.665237 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:22.676463 kubelet[2684]: E0913 00:11:22.675694 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:22.688435 kubelet[2684]: I0913 00:11:22.688267 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-bk5m4" podStartSLOduration=48.688235606 podStartE2EDuration="48.688235606s" podCreationTimestamp="2025-09-13 00:10:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:11:22.686775358 +0000 UTC m=+53.967410578" watchObservedRunningTime="2025-09-13 00:11:22.688235606 +0000 UTC m=+53.968870828" Sep 13 00:11:22.913588 systemd-networkd[1221]: caliea38a8f3760: Gained IPv6LL Sep 13 00:11:23.265124 sshd[5049]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:23.279104 systemd[1]: sshd@7-164.90.159.5:22-139.178.68.195:36672.service: Deactivated successfully. Sep 13 00:11:23.283164 systemd-logind[1559]: Session 8 logged out. Waiting for processes to exit. Sep 13 00:11:23.283925 systemd[1]: session-8.scope: Deactivated successfully. Sep 13 00:11:23.290391 systemd-logind[1559]: Removed session 8. 
Sep 13 00:11:23.681254 kubelet[2684]: E0913 00:11:23.681200 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:23.682543 kubelet[2684]: E0913 00:11:23.682357 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:24.685450 kubelet[2684]: E0913 00:11:24.685378 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Sep 13 00:11:25.105676 containerd[1590]: time="2025-09-13T00:11:25.104862471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.106569 containerd[1590]: time="2025-09-13T00:11:25.106523827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 13 00:11:25.107827 containerd[1590]: time="2025-09-13T00:11:25.107758605Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.111561 containerd[1590]: time="2025-09-13T00:11:25.111506785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:25.112981 containerd[1590]: time="2025-09-13T00:11:25.112916355Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 3.686176153s" Sep 13 00:11:25.113132 containerd[1590]: time="2025-09-13T00:11:25.112984837Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 13 00:11:25.115318 containerd[1590]: time="2025-09-13T00:11:25.115273028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 13 00:11:25.120341 containerd[1590]: time="2025-09-13T00:11:25.120289414Z" level=info msg="CreateContainer within sandbox \"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 13 00:11:25.140836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103857237.mount: Deactivated successfully. 
Sep 13 00:11:25.143784 containerd[1590]: time="2025-09-13T00:11:25.142682526Z" level=info msg="CreateContainer within sandbox \"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b61467ebfd7bd00917db30a622760d85d250ea27939433299d755b1a0e78f3fa\"" Sep 13 00:11:25.145118 containerd[1590]: time="2025-09-13T00:11:25.144996533Z" level=info msg="StartContainer for \"b61467ebfd7bd00917db30a622760d85d250ea27939433299d755b1a0e78f3fa\"" Sep 13 00:11:25.252095 systemd[1]: run-containerd-runc-k8s.io-b61467ebfd7bd00917db30a622760d85d250ea27939433299d755b1a0e78f3fa-runc.1H9D1G.mount: Deactivated successfully. Sep 13 00:11:25.318839 containerd[1590]: time="2025-09-13T00:11:25.317628345Z" level=info msg="StartContainer for \"b61467ebfd7bd00917db30a622760d85d250ea27939433299d755b1a0e78f3fa\" returns successfully" Sep 13 00:11:25.707640 kubelet[2684]: I0913 00:11:25.707566 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75744756bc-gcpv4" podStartSLOduration=31.871547758 podStartE2EDuration="38.707539178s" podCreationTimestamp="2025-09-13 00:10:47 +0000 UTC" firstStartedPulling="2025-09-13 00:11:18.278845739 +0000 UTC m=+49.559480967" lastFinishedPulling="2025-09-13 00:11:25.114837179 +0000 UTC m=+56.395472387" observedRunningTime="2025-09-13 00:11:25.707259895 +0000 UTC m=+56.987895117" watchObservedRunningTime="2025-09-13 00:11:25.707539178 +0000 UTC m=+56.988174442" Sep 13 00:11:28.277310 systemd[1]: Started sshd@8-164.90.159.5:22-139.178.68.195:36682.service - OpenSSH per-connection server daemon (139.178.68.195:36682). Sep 13 00:11:28.447340 sshd[5133]: Accepted publickey for core from 139.178.68.195 port 36682 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4 Sep 13 00:11:28.452137 sshd[5133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:11:28.465228 systemd-logind[1559]: New session 9 of user core. Sep 13 00:11:28.470172 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:11:29.272945 sshd[5133]: pam_unix(sshd:session): session closed for user core Sep 13 00:11:29.281097 containerd[1590]: time="2025-09-13T00:11:29.280863162Z" level=info msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" Sep 13 00:11:29.285121 systemd[1]: sshd@8-164.90.159.5:22-139.178.68.195:36682.service: Deactivated successfully. Sep 13 00:11:29.291666 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:11:29.295781 systemd-logind[1559]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:11:29.302519 systemd-logind[1559]: Removed session 9. Sep 13 00:11:29.508504 systemd-journald[1140]: Under memory pressure, flushing caches. Sep 13 00:11:29.507054 systemd-resolved[1473]: Under memory pressure, flushing caches. Sep 13 00:11:29.507429 systemd-resolved[1473]: Flushed all caches. Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.527 [WARNING][5167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed56a258-9735-4b0f-b601-d65b5338149b", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5", Pod:"coredns-7c65d6cfc9-bk5m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea38a8f3760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.529 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.529 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" iface="eth0" netns="" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.529 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.529 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.776 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.776 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.776 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.804 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.804 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.809 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:29.846417 containerd[1590]: 2025-09-13 00:11:29.830 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:29.848367 containerd[1590]: time="2025-09-13T00:11:29.848037742Z" level=info msg="TearDown network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" successfully" Sep 13 00:11:29.848587 containerd[1590]: time="2025-09-13T00:11:29.848529296Z" level=info msg="StopPodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" returns successfully" Sep 13 00:11:29.911642 containerd[1590]: time="2025-09-13T00:11:29.911585822Z" level=info msg="RemovePodSandbox for \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" Sep 13 00:11:29.915353 containerd[1590]: time="2025-09-13T00:11:29.914907052Z" level=info msg="Forcibly stopping sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\"" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.093 [WARNING][5189] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"ed56a258-9735-4b0f-b601-d65b5338149b", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"f23812f0c879e54395109c63251d2ac5f813a9a4019a1057f14ca1b7770395a5", Pod:"coredns-7c65d6cfc9-bk5m4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliea38a8f3760", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.095 [INFO][5189] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.095 [INFO][5189] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" iface="eth0" netns="" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.096 [INFO][5189] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.096 [INFO][5189] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.166 [INFO][5196] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.167 [INFO][5196] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.167 [INFO][5196] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.182 [WARNING][5196] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.183 [INFO][5196] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" HandleID="k8s-pod-network.7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--bk5m4-eth0" Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.187 [INFO][5196] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:30.204428 containerd[1590]: 2025-09-13 00:11:30.192 [INFO][5189] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8" Sep 13 00:11:30.207517 containerd[1590]: time="2025-09-13T00:11:30.203674833Z" level=info msg="TearDown network for sandbox \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" successfully" Sep 13 00:11:30.283948 containerd[1590]: time="2025-09-13T00:11:30.283655408Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:30.306741 containerd[1590]: time="2025-09-13T00:11:30.306407133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.312313 containerd[1590]: time="2025-09-13T00:11:30.312227541Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.313795 containerd[1590]: time="2025-09-13T00:11:30.313244484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 13 00:11:30.329967 containerd[1590]: time="2025-09-13T00:11:30.329798271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 13 00:11:30.331889 containerd[1590]: time="2025-09-13T00:11:30.331784275Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.216447151s" Sep 13 00:11:30.332334 containerd[1590]: time="2025-09-13T00:11:30.332077740Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 13 00:11:30.334207 containerd[1590]: time="2025-09-13T00:11:30.333910557Z" level=info msg="RemovePodSandbox 
\"7692e338f175ed02efa3167b40fda7e9ec54ee52f4c6112269005604aa6283d8\" returns successfully" Sep 13 00:11:30.340950 containerd[1590]: time="2025-09-13T00:11:30.337474635Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 13 00:11:30.359871 containerd[1590]: time="2025-09-13T00:11:30.359302901Z" level=info msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" Sep 13 00:11:30.394016 containerd[1590]: time="2025-09-13T00:11:30.392219409Z" level=info msg="CreateContainer within sandbox \"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 13 00:11:30.569643 containerd[1590]: time="2025-09-13T00:11:30.569549340Z" level=info msg="CreateContainer within sandbox \"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4a7939f9eae6eb668435e7944fbd55bd9989376803ea43b3aa7f48ba9fb5d5ae\"" Sep 13 00:11:30.573148 containerd[1590]: time="2025-09-13T00:11:30.571844759Z" level=info msg="StartContainer for \"4a7939f9eae6eb668435e7944fbd55bd9989376803ea43b3aa7f48ba9fb5d5ae\"" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.502 [WARNING][5212] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4db298b-8af8-4bf0-9f46-9f2489a8fa88", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692", Pod:"calico-apiserver-75744756bc-gcpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie58d815b1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.503 [INFO][5212] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.503 [INFO][5212] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" iface="eth0" netns="" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.503 [INFO][5212] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.503 [INFO][5212] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.581 [INFO][5220] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.581 [INFO][5220] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.581 [INFO][5220] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.595 [WARNING][5220] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.596 [INFO][5220] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.600 [INFO][5220] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:30.619594 containerd[1590]: 2025-09-13 00:11:30.604 [INFO][5212] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.619594 containerd[1590]: time="2025-09-13T00:11:30.619119798Z" level=info msg="TearDown network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" successfully" Sep 13 00:11:30.619594 containerd[1590]: time="2025-09-13T00:11:30.619156442Z" level=info msg="StopPodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" returns successfully" Sep 13 00:11:30.621962 containerd[1590]: time="2025-09-13T00:11:30.621639272Z" level=info msg="RemovePodSandbox for \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" Sep 13 00:11:30.621962 containerd[1590]: time="2025-09-13T00:11:30.621743873Z" level=info msg="Forcibly stopping sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\"" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.719 [WARNING][5235] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"e4db298b-8af8-4bf0-9f46-9f2489a8fa88", ResourceVersion:"1074", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"1c4fd5a1e7cb74e3e1331e1ff6074585b4f53a81a9866d87e2421d2b25359692", Pod:"calico-apiserver-75744756bc-gcpv4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie58d815b1e0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.721 [INFO][5235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.721 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" iface="eth0" netns="" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.721 [INFO][5235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.721 [INFO][5235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.769 [INFO][5250] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.769 [INFO][5250] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.769 [INFO][5250] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.778 [WARNING][5250] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.778 [INFO][5250] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" HandleID="k8s-pod-network.bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--gcpv4-eth0" Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.780 [INFO][5250] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:30.788994 containerd[1590]: 2025-09-13 00:11:30.784 [INFO][5235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378" Sep 13 00:11:30.791590 containerd[1590]: time="2025-09-13T00:11:30.789447538Z" level=info msg="TearDown network for sandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" successfully" Sep 13 00:11:30.817772 containerd[1590]: time="2025-09-13T00:11:30.817408485Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:30.818107 containerd[1590]: time="2025-09-13T00:11:30.818063557Z" level=info msg="RemovePodSandbox \"bb3590c88af12b563a7b9fb43a400b7b979c38a32b0eec9a294c4d0f408b8378\" returns successfully" Sep 13 00:11:30.819389 containerd[1590]: time="2025-09-13T00:11:30.819355475Z" level=info msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" Sep 13 00:11:30.855397 containerd[1590]: time="2025-09-13T00:11:30.855224892Z" level=info msg="StartContainer for \"4a7939f9eae6eb668435e7944fbd55bd9989376803ea43b3aa7f48ba9fb5d5ae\" returns successfully" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.911 [WARNING][5285] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab", Pod:"coredns-7c65d6cfc9-vzpdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d88ce599fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.911 [INFO][5285] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.911 [INFO][5285] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" iface="eth0" netns="" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.911 [INFO][5285] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.911 [INFO][5285] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.980 [INFO][5303] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.981 [INFO][5303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.981 [INFO][5303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.992 [WARNING][5303] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.992 [INFO][5303] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:30.994 [INFO][5303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.005378 containerd[1590]: 2025-09-13 00:11:31.000 [INFO][5285] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.009160 containerd[1590]: time="2025-09-13T00:11:31.007901200Z" level=info msg="TearDown network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" successfully" Sep 13 00:11:31.009160 containerd[1590]: time="2025-09-13T00:11:31.008905187Z" level=info msg="StopPodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" returns successfully" Sep 13 00:11:31.011972 containerd[1590]: time="2025-09-13T00:11:31.011517352Z" level=info msg="RemovePodSandbox for \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" Sep 13 00:11:31.011972 containerd[1590]: time="2025-09-13T00:11:31.011573603Z" level=info msg="Forcibly stopping sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\"" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.108 [WARNING][5322] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"7578e0a6-6d43-463a-8d3b-dea5d7ff2fe5", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"9f4b4f6a2bc7ed0366b9c78b6defe0df984b379afd98b416af119be856124dab", Pod:"coredns-7c65d6cfc9-vzpdb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.109.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1d88ce599fc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.109 [INFO][5322] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.109 [INFO][5322] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" iface="eth0" netns="" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.109 [INFO][5322] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.109 [INFO][5322] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.154 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.154 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.154 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.166 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.166 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" HandleID="k8s-pod-network.c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Workload="ci--4081.3.5--n--738365eea6-k8s-coredns--7c65d6cfc9--vzpdb-eth0" Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.170 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.176883 containerd[1590]: 2025-09-13 00:11:31.173 [INFO][5322] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57" Sep 13 00:11:31.176883 containerd[1590]: time="2025-09-13T00:11:31.176165030Z" level=info msg="TearDown network for sandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" successfully" Sep 13 00:11:31.194469 containerd[1590]: time="2025-09-13T00:11:31.193757540Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:31.194469 containerd[1590]: time="2025-09-13T00:11:31.193968925Z" level=info msg="RemovePodSandbox \"c5219db7d3766fe704407e1e6f453b511b6a758119ad0c8f50e883d4d542ee57\" returns successfully" Sep 13 00:11:31.194886 containerd[1590]: time="2025-09-13T00:11:31.194689139Z" level=info msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.252 [WARNING][5343] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a2c2404d-4798-433c-8a7a-69c289c0678c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e", Pod:"goldmane-7988f88666-nslpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid324187a2a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.253 [INFO][5343] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.253 [INFO][5343] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" iface="eth0" netns="" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.253 [INFO][5343] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.253 [INFO][5343] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.300 [INFO][5351] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.300 [INFO][5351] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.300 [INFO][5351] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.313 [WARNING][5351] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.313 [INFO][5351] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.316 [INFO][5351] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.322029 containerd[1590]: 2025-09-13 00:11:31.318 [INFO][5343] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.324914 containerd[1590]: time="2025-09-13T00:11:31.322058545Z" level=info msg="TearDown network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" successfully" Sep 13 00:11:31.324914 containerd[1590]: time="2025-09-13T00:11:31.322088172Z" level=info msg="StopPodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" returns successfully" Sep 13 00:11:31.324914 containerd[1590]: time="2025-09-13T00:11:31.323115667Z" level=info msg="RemovePodSandbox for \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" Sep 13 00:11:31.324914 containerd[1590]: time="2025-09-13T00:11:31.323163654Z" level=info msg="Forcibly stopping sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\"" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.388 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"a2c2404d-4798-433c-8a7a-69c289c0678c", ResourceVersion:"968", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e", Pod:"goldmane-7988f88666-nslpp", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.109.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid324187a2a0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.388 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.388 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" iface="eth0" netns="" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.388 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.388 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.427 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.427 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.427 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.440 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.440 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" HandleID="k8s-pod-network.01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Workload="ci--4081.3.5--n--738365eea6-k8s-goldmane--7988f88666--nslpp-eth0" Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.443 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.449280 containerd[1590]: 2025-09-13 00:11:31.446 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e" Sep 13 00:11:31.449280 containerd[1590]: time="2025-09-13T00:11:31.449174663Z" level=info msg="TearDown network for sandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" successfully" Sep 13 00:11:31.455208 containerd[1590]: time="2025-09-13T00:11:31.455104552Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:31.455391 containerd[1590]: time="2025-09-13T00:11:31.455249783Z" level=info msg="RemovePodSandbox \"01432d505a97e4f110e9276688b37bb401aac6d38e7d64603011c06dfe1ba67e\" returns successfully" Sep 13 00:11:31.456079 containerd[1590]: time="2025-09-13T00:11:31.456041767Z" level=info msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" Sep 13 00:11:31.556473 systemd-journald[1140]: Under memory pressure, flushing caches. Sep 13 00:11:31.553260 systemd-resolved[1473]: Under memory pressure, flushing caches. Sep 13 00:11:31.553314 systemd-resolved[1473]: Flushed all caches. Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.536 [WARNING][5386] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0", Pod:"calico-apiserver-75744756bc-dm52h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fcb903c89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.536 [INFO][5386] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.536 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" iface="eth0" netns="" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.536 [INFO][5386] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.536 [INFO][5386] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.572 [INFO][5395] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.573 [INFO][5395] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.573 [INFO][5395] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.583 [WARNING][5395] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.583 [INFO][5395] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.585 [INFO][5395] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.589390 containerd[1590]: 2025-09-13 00:11:31.587 [INFO][5386] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.590164 containerd[1590]: time="2025-09-13T00:11:31.589505537Z" level=info msg="TearDown network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" successfully" Sep 13 00:11:31.590164 containerd[1590]: time="2025-09-13T00:11:31.589543240Z" level=info msg="StopPodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" returns successfully" Sep 13 00:11:31.591912 containerd[1590]: time="2025-09-13T00:11:31.591401953Z" level=info msg="RemovePodSandbox for \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" Sep 13 00:11:31.591912 containerd[1590]: time="2025-09-13T00:11:31.591455207Z" level=info msg="Forcibly stopping sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\"" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.654 [WARNING][5409] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0", GenerateName:"calico-apiserver-75744756bc-", Namespace:"calico-apiserver", SelfLink:"", UID:"0df6e67f-81b5-45d1-8b3a-f7cc40e01d6d", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75744756bc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0", Pod:"calico-apiserver-75744756bc-dm52h", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.109.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5fcb903c89b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.655 [INFO][5409] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.655 [INFO][5409] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" iface="eth0" netns="" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.655 [INFO][5409] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.655 [INFO][5409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.691 [INFO][5418] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.691 [INFO][5418] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.691 [INFO][5418] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.700 [WARNING][5418] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.700 [INFO][5418] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" HandleID="k8s-pod-network.ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--apiserver--75744756bc--dm52h-eth0" Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.703 [INFO][5418] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.711907 containerd[1590]: 2025-09-13 00:11:31.706 [INFO][5409] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3" Sep 13 00:11:31.711907 containerd[1590]: time="2025-09-13T00:11:31.710616115Z" level=info msg="TearDown network for sandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" successfully" Sep 13 00:11:31.717838 containerd[1590]: time="2025-09-13T00:11:31.717779139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 13 00:11:31.718642 containerd[1590]: time="2025-09-13T00:11:31.717869620Z" level=info msg="RemovePodSandbox \"ad60b14aa8bd91219168f3aab4113b30227b819ad2f0d10b39f394f03abed6c3\" returns successfully" Sep 13 00:11:31.718723 containerd[1590]: time="2025-09-13T00:11:31.718668062Z" level=info msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" Sep 13 00:11:31.782488 kubelet[2684]: I0913 00:11:31.781610 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c4785d957-fpstv" podStartSLOduration=27.66033435 podStartE2EDuration="38.776340136s" podCreationTimestamp="2025-09-13 00:10:53 +0000 UTC" firstStartedPulling="2025-09-13 00:11:19.218988703 +0000 UTC m=+50.499623898" lastFinishedPulling="2025-09-13 00:11:30.334994473 +0000 UTC m=+61.615629684" observedRunningTime="2025-09-13 00:11:31.774821781 +0000 UTC m=+63.055457015" watchObservedRunningTime="2025-09-13 00:11:31.776340136 +0000 UTC m=+63.056975361" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.814 [WARNING][5432] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04b956de-e0b5-4148-a086-60a45e92f38a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93", Pod:"csi-node-driver-qsg52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2b4b23d530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.820 [INFO][5432] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.820 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" iface="eth0" netns="" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.820 [INFO][5432] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.820 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.899 [INFO][5459] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.900 [INFO][5459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.900 [INFO][5459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.913 [WARNING][5459] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.913 [INFO][5459] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.916 [INFO][5459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 13 00:11:31.933467 containerd[1590]: 2025-09-13 00:11:31.929 [INFO][5432] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:31.933467 containerd[1590]: time="2025-09-13T00:11:31.932510856Z" level=info msg="TearDown network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" successfully" Sep 13 00:11:31.933467 containerd[1590]: time="2025-09-13T00:11:31.932540049Z" level=info msg="StopPodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" returns successfully" Sep 13 00:11:31.940244 containerd[1590]: time="2025-09-13T00:11:31.938635032Z" level=info msg="RemovePodSandbox for \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" Sep 13 00:11:31.940244 containerd[1590]: time="2025-09-13T00:11:31.938681646Z" level=info msg="Forcibly stopping sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\"" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.025 [WARNING][5476] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"04b956de-e0b5-4148-a086-60a45e92f38a", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93", Pod:"csi-node-driver-qsg52", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.109.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic2b4b23d530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.026 [INFO][5476] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.026 [INFO][5476] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" iface="eth0" netns="" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.026 [INFO][5476] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.026 [INFO][5476] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.083 [INFO][5483] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0" Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.084 [INFO][5483] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.084 [INFO][5483] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.098 [WARNING][5483] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0"
Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.098 [INFO][5483] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" HandleID="k8s-pod-network.f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc" Workload="ci--4081.3.5--n--738365eea6-k8s-csi--node--driver--qsg52-eth0"
Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.102 [INFO][5483] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:32.109054 containerd[1590]: 2025-09-13 00:11:32.105 [INFO][5476] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc"
Sep 13 00:11:32.109054 containerd[1590]: time="2025-09-13T00:11:32.108838031Z" level=info msg="TearDown network for sandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" successfully"
Sep 13 00:11:32.113437 containerd[1590]: time="2025-09-13T00:11:32.113321204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:11:32.113437 containerd[1590]: time="2025-09-13T00:11:32.113408085Z" level=info msg="RemovePodSandbox \"f905da5600cac244b1c61dbc36618a8b318818337e2646a46a2d8b56d41e59cc\" returns successfully"
Sep 13 00:11:32.114592 containerd[1590]: time="2025-09-13T00:11:32.114473459Z" level=info msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\""
Sep 13 00:11:32.117609 containerd[1590]: time="2025-09-13T00:11:32.117546315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291"
Sep 13 00:11:32.118286 containerd[1590]: time="2025-09-13T00:11:32.118247183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:32.121266 containerd[1590]: time="2025-09-13T00:11:32.121181928Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:32.121683 containerd[1590]: time="2025-09-13T00:11:32.121638394Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.783995171s"
Sep 13 00:11:32.121683 containerd[1590]: time="2025-09-13T00:11:32.121676896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\""
Sep 13 00:11:32.122181 containerd[1590]: time="2025-09-13T00:11:32.122147620Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:32.125781 containerd[1590]: time="2025-09-13T00:11:32.125573305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 13 00:11:32.133878 containerd[1590]: time="2025-09-13T00:11:32.133524987Z" level=info msg="CreateContainer within sandbox \"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Sep 13 00:11:32.181631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2852269960.mount: Deactivated successfully.
Sep 13 00:11:32.187876 containerd[1590]: time="2025-09-13T00:11:32.187137390Z" level=info msg="CreateContainer within sandbox \"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"c2b1a9edac59e9e4f5c058ceaaa9375093e6328c331b01abc1ec61dca53fa133\""
Sep 13 00:11:32.195734 containerd[1590]: time="2025-09-13T00:11:32.192630296Z" level=info msg="StartContainer for \"c2b1a9edac59e9e4f5c058ceaaa9375093e6328c331b01abc1ec61dca53fa133\""
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.314 [WARNING][5497] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.330 [INFO][5497] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.330 [INFO][5497] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" iface="eth0" netns=""
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.330 [INFO][5497] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.330 [INFO][5497] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.389 [INFO][5528] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.390 [INFO][5528] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.390 [INFO][5528] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.400 [WARNING][5528] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.400 [INFO][5528] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.404 [INFO][5528] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:32.410036 containerd[1590]: 2025-09-13 00:11:32.407 [INFO][5497] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.410036 containerd[1590]: time="2025-09-13T00:11:32.409733369Z" level=info msg="TearDown network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" successfully"
Sep 13 00:11:32.410036 containerd[1590]: time="2025-09-13T00:11:32.409762803Z" level=info msg="StopPodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" returns successfully"
Sep 13 00:11:32.412627 containerd[1590]: time="2025-09-13T00:11:32.410971615Z" level=info msg="RemovePodSandbox for \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\""
Sep 13 00:11:32.412627 containerd[1590]: time="2025-09-13T00:11:32.411021499Z" level=info msg="Forcibly stopping sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\""
Sep 13 00:11:32.442582 containerd[1590]: time="2025-09-13T00:11:32.441964017Z" level=info msg="StartContainer for \"c2b1a9edac59e9e4f5c058ceaaa9375093e6328c331b01abc1ec61dca53fa133\" returns successfully"
Sep 13 00:11:32.550482 containerd[1590]: time="2025-09-13T00:11:32.550245048Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:32.552538 containerd[1590]: time="2025-09-13T00:11:32.552292426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 13 00:11:32.557103 containerd[1590]: time="2025-09-13T00:11:32.557051539Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 429.607332ms"
Sep 13 00:11:32.557852 containerd[1590]: time="2025-09-13T00:11:32.557718217Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 13 00:11:32.560061 containerd[1590]: time="2025-09-13T00:11:32.559367688Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 13 00:11:32.563965 containerd[1590]: time="2025-09-13T00:11:32.563910935Z" level=info msg="CreateContainer within sandbox \"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.496 [WARNING][5545] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" WorkloadEndpoint="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.496 [INFO][5545] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.497 [INFO][5545] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" iface="eth0" netns=""
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.497 [INFO][5545] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.497 [INFO][5545] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.537 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.537 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.537 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.550 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.550 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" HandleID="k8s-pod-network.66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826" Workload="ci--4081.3.5--n--738365eea6-k8s-whisker--7d994bd988--rfj2l-eth0"
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.557 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:32.568796 containerd[1590]: 2025-09-13 00:11:32.565 [INFO][5545] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826"
Sep 13 00:11:32.569600 containerd[1590]: time="2025-09-13T00:11:32.568842217Z" level=info msg="TearDown network for sandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" successfully"
Sep 13 00:11:32.584788 containerd[1590]: time="2025-09-13T00:11:32.584204081Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:11:32.584788 containerd[1590]: time="2025-09-13T00:11:32.584301870Z" level=info msg="RemovePodSandbox \"66d81fa408066ff2a7456323ac0e51567cf7699e466d7065b53c56287fe8f826\" returns successfully"
Sep 13 00:11:32.585624 containerd[1590]: time="2025-09-13T00:11:32.585199125Z" level=info msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\""
Sep 13 00:11:32.630118 containerd[1590]: time="2025-09-13T00:11:32.630067952Z" level=info msg="CreateContainer within sandbox \"5cf7a808e0242b23b0fea3bc54f3f50dcbec3c1ee8c88d84584475b540192af0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fc418a5de674c572f2a5efb764ac7f9fb10c856f5a4fd269ac4331f787f0a1a9\""
Sep 13 00:11:32.631744 containerd[1590]: time="2025-09-13T00:11:32.631679013Z" level=info msg="StartContainer for \"fc418a5de674c572f2a5efb764ac7f9fb10c856f5a4fd269ac4331f787f0a1a9\""
Sep 13 00:11:32.793087 containerd[1590]: time="2025-09-13T00:11:32.792881669Z" level=info msg="StartContainer for \"fc418a5de674c572f2a5efb764ac7f9fb10c856f5a4fd269ac4331f787f0a1a9\" returns successfully"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.725 [WARNING][5580] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0", GenerateName:"calico-kube-controllers-5c4785d957-", Namespace:"calico-system", SelfLink:"", UID:"a98a040e-2ea2-4095-a76f-86a4e06c7abd", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4785d957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a", Pod:"calico-kube-controllers-5c4785d957-fpstv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f0ddd9204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.726 [INFO][5580] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.726 [INFO][5580] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" iface="eth0" netns=""
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.726 [INFO][5580] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.726 [INFO][5580] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.790 [INFO][5611] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.790 [INFO][5611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.791 [INFO][5611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.806 [WARNING][5611] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.806 [INFO][5611] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.810 [INFO][5611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:32.817167 containerd[1590]: 2025-09-13 00:11:32.813 [INFO][5580] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.818097 containerd[1590]: time="2025-09-13T00:11:32.817271745Z" level=info msg="TearDown network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" successfully"
Sep 13 00:11:32.818097 containerd[1590]: time="2025-09-13T00:11:32.817312109Z" level=info msg="StopPodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" returns successfully"
Sep 13 00:11:32.819019 containerd[1590]: time="2025-09-13T00:11:32.818957769Z" level=info msg="RemovePodSandbox for \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\""
Sep 13 00:11:32.819086 containerd[1590]: time="2025-09-13T00:11:32.819032689Z" level=info msg="Forcibly stopping sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\""
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.896 [WARNING][5634] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0", GenerateName:"calico-kube-controllers-5c4785d957-", Namespace:"calico-system", SelfLink:"", UID:"a98a040e-2ea2-4095-a76f-86a4e06c7abd", ResourceVersion:"1113", Generation:0, CreationTimestamp:time.Date(2025, time.September, 13, 0, 10, 53, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c4785d957", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.5-n-738365eea6", ContainerID:"a2f8dc69e6398ac791ea5219e147b6a522e1706257fefb2db7f2f5891771469a", Pod:"calico-kube-controllers-5c4785d957-fpstv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.109.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib8f0ddd9204", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.897 [INFO][5634] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.897 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" iface="eth0" netns=""
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.897 [INFO][5634] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.897 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.953 [INFO][5642] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.953 [INFO][5642] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.953 [INFO][5642] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.963 [WARNING][5642] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.964 [INFO][5642] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" HandleID="k8s-pod-network.f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1" Workload="ci--4081.3.5--n--738365eea6-k8s-calico--kube--controllers--5c4785d957--fpstv-eth0"
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.968 [INFO][5642] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 13 00:11:32.978994 containerd[1590]: 2025-09-13 00:11:32.971 [INFO][5634] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1"
Sep 13 00:11:32.981637 containerd[1590]: time="2025-09-13T00:11:32.979050145Z" level=info msg="TearDown network for sandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" successfully"
Sep 13 00:11:32.987617 containerd[1590]: time="2025-09-13T00:11:32.987312735Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 13 00:11:32.987617 containerd[1590]: time="2025-09-13T00:11:32.987434843Z" level=info msg="RemovePodSandbox \"f0eb85c63355b47b5da93b9626d6bfe1b1910606eb3d43e74b668faa00b476a1\" returns successfully"
Sep 13 00:11:34.288422 systemd[1]: Started sshd@9-164.90.159.5:22-139.178.68.195:56058.service - OpenSSH per-connection server daemon (139.178.68.195:56058).
Sep 13 00:11:34.624108 systemd[1]: run-containerd-runc-k8s.io-4a7939f9eae6eb668435e7944fbd55bd9989376803ea43b3aa7f48ba9fb5d5ae-runc.nn06jw.mount: Deactivated successfully.
Sep 13 00:11:34.676846 sshd[5654]: Accepted publickey for core from 139.178.68.195 port 56058 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:34.698320 sshd[5654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:34.719920 systemd-logind[1559]: New session 10 of user core.
Sep 13 00:11:34.723118 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 13 00:11:34.908920 kubelet[2684]: I0913 00:11:34.907850 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:11:35.524043 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:35.525420 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:35.525490 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:35.713587 sshd[5654]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:35.725633 systemd[1]: Started sshd@10-164.90.159.5:22-139.178.68.195:56072.service - OpenSSH per-connection server daemon (139.178.68.195:56072).
Sep 13 00:11:35.732988 systemd[1]: sshd@9-164.90.159.5:22-139.178.68.195:56058.service: Deactivated successfully.
Sep 13 00:11:35.743488 systemd[1]: session-10.scope: Deactivated successfully.
Sep 13 00:11:35.743924 systemd-logind[1559]: Session 10 logged out. Waiting for processes to exit.
Sep 13 00:11:35.752131 systemd-logind[1559]: Removed session 10.
Sep 13 00:11:35.832782 sshd[5692]: Accepted publickey for core from 139.178.68.195 port 56072 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:35.835675 sshd[5692]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:35.846296 systemd-logind[1559]: New session 11 of user core.
Sep 13 00:11:35.852326 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:11:36.200080 sshd[5692]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:36.221894 systemd[1]: Started sshd@11-164.90.159.5:22-139.178.68.195:56082.service - OpenSSH per-connection server daemon (139.178.68.195:56082).
Sep 13 00:11:36.232239 systemd[1]: sshd@10-164.90.159.5:22-139.178.68.195:56072.service: Deactivated successfully.
Sep 13 00:11:36.252751 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:11:36.260740 systemd-logind[1559]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:11:36.272667 systemd-logind[1559]: Removed session 11.
Sep 13 00:11:36.424623 sshd[5704]: Accepted publickey for core from 139.178.68.195 port 56082 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:36.431490 sshd[5704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:36.446081 systemd-logind[1559]: New session 12 of user core.
Sep 13 00:11:36.452178 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:11:36.745541 sshd[5704]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:36.754949 systemd[1]: sshd@11-164.90.159.5:22-139.178.68.195:56082.service: Deactivated successfully.
Sep 13 00:11:36.760471 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:11:36.761975 systemd-logind[1559]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:11:36.765470 systemd-logind[1559]: Removed session 12.
Sep 13 00:11:36.937585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3938608681.mount: Deactivated successfully.
Sep 13 00:11:37.571175 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:37.568837 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:37.568848 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:37.827186 containerd[1590]: time="2025-09-13T00:11:37.811667666Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:37.873370 containerd[1590]: time="2025-09-13T00:11:37.826803329Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 13 00:11:37.904661 containerd[1590]: time="2025-09-13T00:11:37.904421726Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:37.917113 containerd[1590]: time="2025-09-13T00:11:37.917023443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:37.920374 containerd[1590]: time="2025-09-13T00:11:37.919976799Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.360553647s"
Sep 13 00:11:37.920374 containerd[1590]: time="2025-09-13T00:11:37.920062069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 13 00:11:38.030897 containerd[1590]: time="2025-09-13T00:11:38.029859147Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 13 00:11:38.031755 containerd[1590]: time="2025-09-13T00:11:38.031482875Z" level=info msg="CreateContainer within sandbox \"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 13 00:11:38.193187 containerd[1590]: time="2025-09-13T00:11:38.192963855Z" level=info msg="CreateContainer within sandbox \"aeea6726ff40334504b97ad990ab2e0485217104c2cc8f944ea7063527c2165e\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"95a73c95137368a287f49e7f31d8c8cf4f7aae7ea84ce1a6597da354e921dcd5\""
Sep 13 00:11:38.207774 containerd[1590]: time="2025-09-13T00:11:38.207639488Z" level=info msg="StartContainer for \"95a73c95137368a287f49e7f31d8c8cf4f7aae7ea84ce1a6597da354e921dcd5\""
Sep 13 00:11:38.587767 containerd[1590]: time="2025-09-13T00:11:38.586203874Z" level=info msg="StartContainer for \"95a73c95137368a287f49e7f31d8c8cf4f7aae7ea84ce1a6597da354e921dcd5\" returns successfully"
Sep 13 00:11:39.386187 kubelet[2684]: I0913 00:11:39.383314 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-nslpp" podStartSLOduration=30.253083518 podStartE2EDuration="47.346217344s" podCreationTimestamp="2025-09-13 00:10:52 +0000 UTC" firstStartedPulling="2025-09-13 00:11:20.897622709 +0000 UTC m=+52.178257921" lastFinishedPulling="2025-09-13 00:11:37.990756532 +0000 UTC m=+69.271391747" observedRunningTime="2025-09-13 00:11:39.297041548 +0000 UTC m=+70.577676828" watchObservedRunningTime="2025-09-13 00:11:39.346217344 +0000 UTC m=+70.626852565"
Sep 13 00:11:39.391591 kubelet[2684]: I0913 00:11:39.390343 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75744756bc-dm52h" podStartSLOduration=40.39116032 podStartE2EDuration="52.390309109s" podCreationTimestamp="2025-09-13 00:10:47 +0000 UTC" firstStartedPulling="2025-09-13 00:11:20.559878844 +0000 UTC m=+51.840514066" lastFinishedPulling="2025-09-13 00:11:32.559027657 +0000 UTC m=+63.839662855" observedRunningTime="2025-09-13 00:11:33.825117134 +0000 UTC m=+65.105752367" watchObservedRunningTime="2025-09-13 00:11:39.390309109 +0000 UTC m=+70.670944335"
Sep 13 00:11:39.631744 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:39.617360 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:39.617424 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:40.165008 containerd[1590]: time="2025-09-13T00:11:40.164761854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:40.165008 containerd[1590]: time="2025-09-13T00:11:40.164873707Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 13 00:11:40.168906 containerd[1590]: time="2025-09-13T00:11:40.168809078Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:40.171081 containerd[1590]: time="2025-09-13T00:11:40.170542882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.140618794s"
Sep 13 00:11:40.171081 containerd[1590]: time="2025-09-13T00:11:40.170615181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 13 00:11:40.173438 containerd[1590]: time="2025-09-13T00:11:40.171665394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:40.291430 containerd[1590]: time="2025-09-13T00:11:40.291339479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 13 00:11:40.410074 containerd[1590]: time="2025-09-13T00:11:40.409861422Z" level=info msg="CreateContainer within sandbox \"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 13 00:11:40.559862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount162513134.mount: Deactivated successfully.
Sep 13 00:11:40.576207 containerd[1590]: time="2025-09-13T00:11:40.576146115Z" level=info msg="CreateContainer within sandbox \"7975a910b373f376bc13ba3f6a0bc5c0eb22c0ca062ca2a6be945eb0ac408c93\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3b6afea18616196b511414c64bed10f7a88caa0d8f1a4c63c9613f60052b986e\""
Sep 13 00:11:40.656386 systemd[1]: run-containerd-runc-k8s.io-4f459de11107551deebbc6ff30442fe2ee36a74faa8ffee59db8166f7268916c-runc.c5pkiN.mount: Deactivated successfully.
Sep 13 00:11:40.683629 containerd[1590]: time="2025-09-13T00:11:40.682575039Z" level=info msg="StartContainer for \"3b6afea18616196b511414c64bed10f7a88caa0d8f1a4c63c9613f60052b986e\""
Sep 13 00:11:40.924301 containerd[1590]: time="2025-09-13T00:11:40.923777862Z" level=info msg="StartContainer for \"3b6afea18616196b511414c64bed10f7a88caa0d8f1a4c63c9613f60052b986e\" returns successfully"
Sep 13 00:11:41.255180 kubelet[2684]: I0913 00:11:41.252434 2684 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 13 00:11:41.255180 kubelet[2684]: I0913 00:11:41.255069 2684 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 13 00:11:41.495548 kubelet[2684]: I0913 00:11:41.490913 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-qsg52" podStartSLOduration=26.37599374 podStartE2EDuration="48.484478893s" podCreationTimestamp="2025-09-13 00:10:53 +0000 UTC" firstStartedPulling="2025-09-13 00:11:18.157242165 +0000 UTC m=+49.437877368" lastFinishedPulling="2025-09-13 00:11:40.265727303 +0000 UTC m=+71.546362521" observedRunningTime="2025-09-13 00:11:41.452277938 +0000 UTC m=+72.732913156" watchObservedRunningTime="2025-09-13 00:11:41.484478893 +0000 UTC m=+72.765114123"
Sep 13 00:11:41.667120 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:41.665296 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:41.665354 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:41.763078 systemd[1]: Started sshd@12-164.90.159.5:22-139.178.68.195:39132.service - OpenSSH per-connection server daemon (139.178.68.195:39132).
Sep 13 00:11:41.913519 sshd[5884]: Accepted publickey for core from 139.178.68.195 port 39132 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:41.921878 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:41.939884 systemd-logind[1559]: New session 13 of user core.
Sep 13 00:11:41.946252 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:11:42.847616 sshd[5884]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:42.854280 systemd[1]: sshd@12-164.90.159.5:22-139.178.68.195:39132.service: Deactivated successfully.
Sep 13 00:11:42.863832 systemd-logind[1559]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:11:42.865252 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:11:42.869659 systemd-logind[1559]: Removed session 13.
Sep 13 00:11:43.567046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1816985015.mount: Deactivated successfully.
Sep 13 00:11:43.605531 containerd[1590]: time="2025-09-13T00:11:43.605477470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:43.607316 containerd[1590]: time="2025-09-13T00:11:43.607174628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 13 00:11:43.609453 containerd[1590]: time="2025-09-13T00:11:43.609382343Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:43.618012 containerd[1590]: time="2025-09-13T00:11:43.617322084Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:11:43.619140 containerd[1590]: time="2025-09-13T00:11:43.619036242Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.327619761s"
Sep 13 00:11:43.619347 containerd[1590]: time="2025-09-13T00:11:43.619318982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 13 00:11:43.716183 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:43.713062 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:43.713117 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:43.725350 containerd[1590]: time="2025-09-13T00:11:43.723157016Z" level=info msg="CreateContainer within sandbox \"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 13 00:11:43.773259 containerd[1590]: time="2025-09-13T00:11:43.773189031Z" level=info msg="CreateContainer within sandbox \"eba88c4363663a94033db6a2359e179436cc6352ad6b602a2338e9f9d48e6b6d\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"f55b036cc8d338ad8562a98c76194d94d1cef10b59c3101716f10fb69c3fa38e\""
Sep 13 00:11:43.775417 containerd[1590]: time="2025-09-13T00:11:43.775235716Z" level=info msg="StartContainer for \"f55b036cc8d338ad8562a98c76194d94d1cef10b59c3101716f10fb69c3fa38e\""
Sep 13 00:11:44.128155 containerd[1590]: time="2025-09-13T00:11:44.128073971Z" level=info msg="StartContainer for \"f55b036cc8d338ad8562a98c76194d94d1cef10b59c3101716f10fb69c3fa38e\" returns successfully"
Sep 13 00:11:44.750763 kubelet[2684]: I0913 00:11:44.749981 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-cbf85fcc5-wtk7f" podStartSLOduration=5.495179728 podStartE2EDuration="29.73693831s" podCreationTimestamp="2025-09-13 00:11:15 +0000 UTC" firstStartedPulling="2025-09-13 00:11:19.398212145 +0000 UTC m=+50.678847355" lastFinishedPulling="2025-09-13 00:11:43.639970741 +0000 UTC m=+74.920605937" observedRunningTime="2025-09-13 00:11:44.724580883 +0000 UTC m=+76.005216094" watchObservedRunningTime="2025-09-13 00:11:44.73693831 +0000 UTC m=+76.017573525"
Sep 13 00:11:45.761278 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:45.763762 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:45.761289 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:47.859242 systemd[1]: Started sshd@13-164.90.159.5:22-139.178.68.195:39142.service - OpenSSH per-connection server daemon (139.178.68.195:39142).
Sep 13 00:11:47.993068 sshd[5944]: Accepted publickey for core from 139.178.68.195 port 39142 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:47.994764 sshd[5944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:48.001933 systemd-logind[1559]: New session 14 of user core.
Sep 13 00:11:48.009274 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:11:48.660939 sshd[5944]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:48.671470 systemd[1]: sshd@13-164.90.159.5:22-139.178.68.195:39142.service: Deactivated successfully.
Sep 13 00:11:48.683242 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:11:48.684974 systemd-logind[1559]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:11:48.689020 systemd-logind[1559]: Removed session 14.
Sep 13 00:11:49.537040 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:49.539906 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:49.537049 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:51.585485 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:51.587495 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:51.585495 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:53.681829 systemd[1]: Started sshd@14-164.90.159.5:22-139.178.68.195:47958.service - OpenSSH per-connection server daemon (139.178.68.195:47958).
Sep 13 00:11:53.807089 sshd[5958]: Accepted publickey for core from 139.178.68.195 port 47958 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:53.816223 sshd[5958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:53.833348 systemd-logind[1559]: New session 15 of user core.
Sep 13 00:11:53.843629 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:11:54.243171 sshd[5958]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:54.252146 systemd[1]: Started sshd@15-164.90.159.5:22-139.178.68.195:47970.service - OpenSSH per-connection server daemon (139.178.68.195:47970).
Sep 13 00:11:54.260004 systemd[1]: sshd@14-164.90.159.5:22-139.178.68.195:47958.service: Deactivated successfully.
Sep 13 00:11:54.267794 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:11:54.270137 systemd-logind[1559]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:11:54.271795 systemd-logind[1559]: Removed session 15.
Sep 13 00:11:54.337334 sshd[5969]: Accepted publickey for core from 139.178.68.195 port 47970 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:54.341686 sshd[5969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:54.358906 systemd-logind[1559]: New session 16 of user core.
Sep 13 00:11:54.361334 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:11:54.909601 sshd[5969]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:54.938038 systemd[1]: Started sshd@16-164.90.159.5:22-139.178.68.195:47978.service - OpenSSH per-connection server daemon (139.178.68.195:47978).
Sep 13 00:11:54.938875 systemd[1]: sshd@15-164.90.159.5:22-139.178.68.195:47970.service: Deactivated successfully.
Sep 13 00:11:54.965282 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:11:54.987359 systemd-logind[1559]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:11:54.993930 systemd-logind[1559]: Removed session 16.
Sep 13 00:11:55.068775 kubelet[2684]: E0913 00:11:55.068674 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:11:55.127512 sshd[5981]: Accepted publickey for core from 139.178.68.195 port 47978 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:55.131299 sshd[5981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:55.155559 systemd-logind[1559]: New session 17 of user core.
Sep 13 00:11:55.164857 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:11:55.556892 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:55.553076 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:55.553086 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:57.604099 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:57.603382 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:57.603395 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:11:58.055145 kubelet[2684]: I0913 00:11:57.996826 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 13 00:11:58.130589 sshd[5981]: pam_unix(sshd:session): session closed for user core
Sep 13 00:11:58.151794 systemd[1]: Started sshd@17-164.90.159.5:22-139.178.68.195:47984.service - OpenSSH per-connection server daemon (139.178.68.195:47984).
Sep 13 00:11:58.158861 systemd[1]: sshd@16-164.90.159.5:22-139.178.68.195:47978.service: Deactivated successfully.
Sep 13 00:11:58.187069 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:11:58.191216 systemd-logind[1559]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:11:58.212533 systemd-logind[1559]: Removed session 17.
Sep 13 00:11:58.452285 sshd[5999]: Accepted publickey for core from 139.178.68.195 port 47984 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:11:58.458146 sshd[5999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:11:58.471963 systemd-logind[1559]: New session 18 of user core.
Sep 13 00:11:58.478249 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:11:59.658858 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:11:59.659921 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:11:59.659941 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:12:00.004768 kubelet[2684]: E0913 00:12:00.003789 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:12:00.370050 sshd[5999]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:00.389838 systemd[1]: Started sshd@18-164.90.159.5:22-139.178.68.195:60078.service - OpenSSH per-connection server daemon (139.178.68.195:60078).
Sep 13 00:12:00.411651 systemd[1]: sshd@17-164.90.159.5:22-139.178.68.195:47984.service: Deactivated successfully.
Sep 13 00:12:00.437449 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:12:00.441328 systemd-logind[1559]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:12:00.452049 systemd-logind[1559]: Removed session 18.
Sep 13 00:12:00.661744 sshd[6029]: Accepted publickey for core from 139.178.68.195 port 60078 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:12:00.668226 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:00.685183 systemd-logind[1559]: New session 19 of user core.
Sep 13 00:12:00.701692 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:12:00.949519 kubelet[2684]: E0913 00:12:00.949139 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:12:01.210308 sshd[6029]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:01.222634 systemd[1]: sshd@18-164.90.159.5:22-139.178.68.195:60078.service: Deactivated successfully.
Sep 13 00:12:01.247938 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:12:01.252434 systemd-logind[1559]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:12:01.257690 systemd-logind[1559]: Removed session 19.
Sep 13 00:12:01.701107 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:12:01.715017 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:12:01.715041 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:12:05.924373 kubelet[2684]: E0913 00:12:05.924210 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:12:06.224777 systemd[1]: Started sshd@19-164.90.159.5:22-139.178.68.195:60086.service - OpenSSH per-connection server daemon (139.178.68.195:60086).
Sep 13 00:12:06.344448 sshd[6111]: Accepted publickey for core from 139.178.68.195 port 60086 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:12:06.348835 sshd[6111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:06.355120 systemd-logind[1559]: New session 20 of user core.
Sep 13 00:12:06.363299 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:12:06.867128 sshd[6111]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:06.874518 systemd[1]: sshd@19-164.90.159.5:22-139.178.68.195:60086.service: Deactivated successfully.
Sep 13 00:12:06.881845 systemd-logind[1559]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:12:06.882408 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:12:06.887874 systemd-logind[1559]: Removed session 20.
Sep 13 00:12:07.521109 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:12:07.523388 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:12:07.521118 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:12:07.925435 kubelet[2684]: E0913 00:12:07.925270 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Sep 13 00:12:11.883431 systemd[1]: Started sshd@20-164.90.159.5:22-139.178.68.195:45836.service - OpenSSH per-connection server daemon (139.178.68.195:45836).
Sep 13 00:12:12.032737 sshd[6154]: Accepted publickey for core from 139.178.68.195 port 45836 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:12:12.036853 sshd[6154]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:12.052084 systemd-logind[1559]: New session 21 of user core.
Sep 13 00:12:12.057161 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:12:12.659072 sshd[6154]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:12.664202 systemd[1]: sshd@20-164.90.159.5:22-139.178.68.195:45836.service: Deactivated successfully.
Sep 13 00:12:12.672260 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:12:12.688242 systemd-logind[1559]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:12:12.692345 systemd-logind[1559]: Removed session 21.
Sep 13 00:12:13.539362 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:12:13.538791 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:12:13.538800 systemd-resolved[1473]: Flushed all caches.
Sep 13 00:12:15.492780 systemd[1]: Started sshd@21-164.90.159.5:22-185.156.73.233:26170.service - OpenSSH per-connection server daemon (185.156.73.233:26170).
Sep 13 00:12:17.679063 systemd[1]: Started sshd@22-164.90.159.5:22-139.178.68.195:45840.service - OpenSSH per-connection server daemon (139.178.68.195:45840).
Sep 13 00:12:17.881874 sshd[6170]: Accepted publickey for core from 139.178.68.195 port 45840 ssh2: RSA SHA256:i1Ftf+dHap467vAMGrpprHOe/YDo4Q7mKXTNrA2FlO4
Sep 13 00:12:17.883604 sshd[6170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:12:17.907649 systemd-logind[1559]: New session 22 of user core.
Sep 13 00:12:17.915290 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:12:18.401170 sshd[6170]: pam_unix(sshd:session): session closed for user core
Sep 13 00:12:18.410377 systemd[1]: sshd@22-164.90.159.5:22-139.178.68.195:45840.service: Deactivated successfully.
Sep 13 00:12:18.418997 systemd-logind[1559]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:12:18.420150 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:12:18.423025 systemd-logind[1559]: Removed session 22.
Sep 13 00:12:19.342306 sshd[6168]: Connection closed by authenticating user root 185.156.73.233 port 26170 [preauth]
Sep 13 00:12:19.346991 systemd[1]: sshd@21-164.90.159.5:22-185.156.73.233:26170.service: Deactivated successfully.
Sep 13 00:12:19.557937 systemd-journald[1140]: Under memory pressure, flushing caches.
Sep 13 00:12:19.557795 systemd-resolved[1473]: Under memory pressure, flushing caches.
Sep 13 00:12:19.557804 systemd-resolved[1473]: Flushed all caches.