Sep 12 17:38:42.110236 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:38:42.110271 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:38:42.110289 kernel: BIOS-provided physical RAM map:
Sep 12 17:38:42.110298 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:38:42.110305 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:38:42.110312 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:38:42.110319 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 17:38:42.110326 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 17:38:42.110332 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:38:42.110342 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:38:42.110349 kernel: NX (Execute Disable) protection: active
Sep 12 17:38:42.110356 kernel: APIC: Static calls initialized
Sep 12 17:38:42.110370 kernel: SMBIOS 2.8 present.
Sep 12 17:38:42.110377 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 17:38:42.110385 kernel: Hypervisor detected: KVM
Sep 12 17:38:42.110396 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:38:42.110407 kernel: kvm-clock: using sched offset of 3291551587 cycles
Sep 12 17:38:42.110415 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:38:42.110423 kernel: tsc: Detected 1995.310 MHz processor
Sep 12 17:38:42.110431 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:38:42.110439 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:38:42.110447 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 17:38:42.110455 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:38:42.110467 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:38:42.110486 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:38:42.110496 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 17:38:42.110507 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110518 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110530 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110542 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 17:38:42.110553 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110565 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110576 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110590 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:38:42.110602 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 17:38:42.110614 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 17:38:42.110625 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 17:38:42.110637 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 17:38:42.110646 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 17:38:42.110654 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 17:38:42.110667 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 17:38:42.110677 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:38:42.110684 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:38:42.110692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:38:42.110700 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 17:38:42.110715 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 12 17:38:42.110729 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 12 17:38:42.110749 kernel: Zone ranges:
Sep 12 17:38:42.110760 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:38:42.110772 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 17:38:42.110783 kernel: Normal empty
Sep 12 17:38:42.110795 kernel: Movable zone start for each node
Sep 12 17:38:42.110806 kernel: Early memory node ranges
Sep 12 17:38:42.110817 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:38:42.110828 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 17:38:42.110839 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 17:38:42.110856 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:38:42.110868 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:38:42.110884 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 17:38:42.110896 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:38:42.110907 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:38:42.110918 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:38:42.110929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:38:42.110940 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:38:42.110951 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:38:42.110967 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:38:42.110977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:38:42.110989 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:38:42.111001 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:38:42.111013 kernel: TSC deadline timer available
Sep 12 17:38:42.111026 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:38:42.111040 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:38:42.111054 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 17:38:42.111098 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:38:42.111111 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:38:42.111129 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:38:42.111141 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:38:42.111153 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:38:42.111166 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:38:42.111176 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 17:38:42.111187 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:38:42.111196 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:38:42.111203 kernel: random: crng init done
Sep 12 17:38:42.111214 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:38:42.111222 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:38:42.111230 kernel: Fallback order for Node 0: 0
Sep 12 17:38:42.111238 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 12 17:38:42.111246 kernel: Policy zone: DMA32
Sep 12 17:38:42.111253 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:38:42.111261 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125148K reserved, 0K cma-reserved)
Sep 12 17:38:42.111269 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:38:42.111280 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:38:42.111287 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:38:42.111295 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:38:42.111302 kernel: Dynamic Preempt: voluntary
Sep 12 17:38:42.111309 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:38:42.111318 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:38:42.111326 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:38:42.111334 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:38:42.111342 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:38:42.111349 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:38:42.111361 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:38:42.111368 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:38:42.111376 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:38:42.111384 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:38:42.111396 kernel: Console: colour VGA+ 80x25
Sep 12 17:38:42.111404 kernel: printk: console [tty0] enabled
Sep 12 17:38:42.111412 kernel: printk: console [ttyS0] enabled
Sep 12 17:38:42.111420 kernel: ACPI: Core revision 20230628
Sep 12 17:38:42.111428 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:38:42.111439 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:38:42.111447 kernel: x2apic enabled
Sep 12 17:38:42.111454 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:38:42.111462 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:38:42.111469 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985bfd9acd, max_idle_ns: 881590799062 ns
Sep 12 17:38:42.111477 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995310)
Sep 12 17:38:42.111484 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:38:42.111492 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:38:42.111512 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:38:42.111520 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:38:42.111529 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:38:42.111540 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 17:38:42.111548 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:38:42.111556 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:38:42.111564 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:38:42.111572 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:38:42.111580 kernel: active return thunk: its_return_thunk
Sep 12 17:38:42.111595 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:38:42.111604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:38:42.111612 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:38:42.111620 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:38:42.111634 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:38:42.111649 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:38:42.111662 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:38:42.111676 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:38:42.111692 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:38:42.111700 kernel: landlock: Up and running.
Sep 12 17:38:42.111708 kernel: SELinux: Initializing.
Sep 12 17:38:42.111722 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:38:42.111735 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:38:42.111750 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 17:38:42.111766 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:38:42.111781 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:38:42.111796 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:38:42.111815 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 17:38:42.111829 kernel: signal: max sigframe size: 1776
Sep 12 17:38:42.111839 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:38:42.111848 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:38:42.111856 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:38:42.111864 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:38:42.111873 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:38:42.111881 kernel: .... node #0, CPUs: #1
Sep 12 17:38:42.111895 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:38:42.111911 kernel: smpboot: Max logical packages: 1
Sep 12 17:38:42.111924 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
Sep 12 17:38:42.111938 kernel: devtmpfs: initialized
Sep 12 17:38:42.111953 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:38:42.111965 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:38:42.111974 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:38:42.111983 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:38:42.111998 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:38:42.112012 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:38:42.112029 kernel: audit: type=2000 audit(1757698720.280:1): state=initialized audit_enabled=0 res=1
Sep 12 17:38:42.112040 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:38:42.112053 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:38:42.114109 kernel: cpuidle: using governor menu
Sep 12 17:38:42.114156 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:38:42.114169 kernel: dca service started, version 1.12.1
Sep 12 17:38:42.114180 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:38:42.114195 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:38:42.114210 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:38:42.114228 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:38:42.114236 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:38:42.114245 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:38:42.114254 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:38:42.114263 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:38:42.114277 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:38:42.114286 kernel: ACPI: Interpreter enabled
Sep 12 17:38:42.114300 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:38:42.114315 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:38:42.114334 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:38:42.114349 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:38:42.114364 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:38:42.114373 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:38:42.114693 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:38:42.114811 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:38:42.114933 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:38:42.114958 kernel: acpiphp: Slot [3] registered
Sep 12 17:38:42.114967 kernel: acpiphp: Slot [4] registered
Sep 12 17:38:42.114976 kernel: acpiphp: Slot [5] registered
Sep 12 17:38:42.114984 kernel: acpiphp: Slot [6] registered
Sep 12 17:38:42.114993 kernel: acpiphp: Slot [7] registered
Sep 12 17:38:42.115001 kernel: acpiphp: Slot [8] registered
Sep 12 17:38:42.115009 kernel: acpiphp: Slot [9] registered
Sep 12 17:38:42.115018 kernel: acpiphp: Slot [10] registered
Sep 12 17:38:42.115026 kernel: acpiphp: Slot [11] registered
Sep 12 17:38:42.115037 kernel: acpiphp: Slot [12] registered
Sep 12 17:38:42.115049 kernel: acpiphp: Slot [13] registered
Sep 12 17:38:42.115060 kernel: acpiphp: Slot [14] registered
Sep 12 17:38:42.117111 kernel: acpiphp: Slot [15] registered
Sep 12 17:38:42.117140 kernel: acpiphp: Slot [16] registered
Sep 12 17:38:42.117150 kernel: acpiphp: Slot [17] registered
Sep 12 17:38:42.117160 kernel: acpiphp: Slot [18] registered
Sep 12 17:38:42.117168 kernel: acpiphp: Slot [19] registered
Sep 12 17:38:42.117195 kernel: acpiphp: Slot [20] registered
Sep 12 17:38:42.117208 kernel: acpiphp: Slot [21] registered
Sep 12 17:38:42.117229 kernel: acpiphp: Slot [22] registered
Sep 12 17:38:42.117243 kernel: acpiphp: Slot [23] registered
Sep 12 17:38:42.117256 kernel: acpiphp: Slot [24] registered
Sep 12 17:38:42.117269 kernel: acpiphp: Slot [25] registered
Sep 12 17:38:42.117282 kernel: acpiphp: Slot [26] registered
Sep 12 17:38:42.117291 kernel: acpiphp: Slot [27] registered
Sep 12 17:38:42.117300 kernel: acpiphp: Slot [28] registered
Sep 12 17:38:42.117308 kernel: acpiphp: Slot [29] registered
Sep 12 17:38:42.117316 kernel: acpiphp: Slot [30] registered
Sep 12 17:38:42.117329 kernel: acpiphp: Slot [31] registered
Sep 12 17:38:42.117338 kernel: PCI host bridge to bus 0000:00
Sep 12 17:38:42.117567 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:38:42.117719 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:38:42.117829 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:38:42.117927 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:38:42.118015 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 17:38:42.120200 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:38:42.120373 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:38:42.120494 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:38:42.120630 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 12 17:38:42.120744 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 12 17:38:42.120897 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 17:38:42.121041 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 17:38:42.121280 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 17:38:42.121422 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 17:38:42.121595 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 12 17:38:42.121713 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 12 17:38:42.121874 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:38:42.122021 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 17:38:42.124305 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 17:38:42.124481 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 12 17:38:42.124595 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 12 17:38:42.124712 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 17:38:42.124818 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 12 17:38:42.124925 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 12 17:38:42.125030 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:38:42.125247 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:38:42.125382 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 12 17:38:42.125543 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 12 17:38:42.125689 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 17:38:42.125846 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:38:42.125954 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 12 17:38:42.126063 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 12 17:38:42.128349 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 17:38:42.128491 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 12 17:38:42.128602 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 12 17:38:42.128717 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 12 17:38:42.128850 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 17:38:42.128991 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:38:42.129218 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:38:42.129394 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 12 17:38:42.129546 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 17:38:42.129686 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:38:42.129901 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 12 17:38:42.130193 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 12 17:38:42.130383 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 17:38:42.130595 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 12 17:38:42.130741 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 12 17:38:42.130839 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 17:38:42.130850 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:38:42.130859 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:38:42.130869 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:38:42.130877 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:38:42.130889 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:38:42.130898 kernel: iommu: Default domain type: Translated
Sep 12 17:38:42.130906 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:38:42.130915 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:38:42.130923 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:38:42.130932 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:38:42.130941 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 17:38:42.131056 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 17:38:42.135324 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 17:38:42.135457 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:38:42.135468 kernel: vgaarb: loaded
Sep 12 17:38:42.135478 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:38:42.135487 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:38:42.135496 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:38:42.135504 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:38:42.135513 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:38:42.135521 kernel: pnp: PnP ACPI init
Sep 12 17:38:42.135530 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 17:38:42.135542 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:38:42.135551 kernel: NET: Registered PF_INET protocol family
Sep 12 17:38:42.135559 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:38:42.135568 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:38:42.135577 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:38:42.135585 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:38:42.135594 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:38:42.135602 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:38:42.135611 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:38:42.135622 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:38:42.135630 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:38:42.135639 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:38:42.135771 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:38:42.135865 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:38:42.135951 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:38:42.136038 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:38:42.136141 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 17:38:42.136253 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 17:38:42.136374 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:38:42.136387 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:38:42.136499 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 35016 usecs
Sep 12 17:38:42.136510 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:38:42.136519 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:38:42.136528 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985bfd9acd, max_idle_ns: 881590799062 ns
Sep 12 17:38:42.136536 kernel: Initialise system trusted keyrings
Sep 12 17:38:42.136552 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:38:42.136565 kernel: Key type asymmetric registered
Sep 12 17:38:42.136578 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:38:42.136592 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:38:42.136606 kernel: io scheduler mq-deadline registered
Sep 12 17:38:42.136620 kernel: io scheduler kyber registered
Sep 12 17:38:42.136628 kernel: io scheduler bfq registered
Sep 12 17:38:42.136637 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:38:42.136646 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 17:38:42.136654 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:38:42.136666 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:38:42.136675 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:38:42.136683 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:38:42.136698 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:38:42.136712 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:38:42.136723 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:38:42.136878 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 17:38:42.136892 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:38:42.137019 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 17:38:42.138693 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:38:41 UTC (1757698721)
Sep 12 17:38:42.138847 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:38:42.138865 kernel: intel_pstate: CPU model not supported
Sep 12 17:38:42.138879 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:38:42.138893 kernel: Segment Routing with IPv6
Sep 12 17:38:42.138907 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:38:42.138921 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:38:42.138941 kernel: Key type dns_resolver registered
Sep 12 17:38:42.138951 kernel: IPI shorthand broadcast: enabled
Sep 12 17:38:42.138960 kernel: sched_clock: Marking stable (1166003719, 174325514)->(1484218150, -143888917)
Sep 12 17:38:42.138971 kernel: registered taskstats version 1
Sep 12 17:38:42.138984 kernel: Loading compiled-in X.509 certificates
Sep 12 17:38:42.138998 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:38:42.139012 kernel: Key type .fscrypt registered
Sep 12 17:38:42.139026 kernel: Key type fscrypt-provisioning registered
Sep 12 17:38:42.139034 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:38:42.139046 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:38:42.139054 kernel: ima: No architecture policies found
Sep 12 17:38:42.139063 kernel: clk: Disabling unused clocks
Sep 12 17:38:42.139087 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:38:42.139100 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:38:42.139142 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:38:42.139160 kernel: Run /init as init process
Sep 12 17:38:42.139171 kernel: with arguments:
Sep 12 17:38:42.139180 kernel: /init
Sep 12 17:38:42.139191 kernel: with environment:
Sep 12 17:38:42.139204 kernel: HOME=/
Sep 12 17:38:42.139219 kernel: TERM=linux
Sep 12 17:38:42.139231 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:38:42.139245 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:38:42.139256 systemd[1]: Detected virtualization kvm.
Sep 12 17:38:42.139266 systemd[1]: Detected architecture x86-64.
Sep 12 17:38:42.139275 systemd[1]: Running in initrd.
Sep 12 17:38:42.139287 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:38:42.139295 systemd[1]: Hostname set to .
Sep 12 17:38:42.139304 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:38:42.139313 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:38:42.139323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:38:42.139340 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:38:42.139357 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:38:42.139370 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:38:42.139382 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:38:42.139391 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:38:42.139402 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:38:42.139411 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:38:42.139420 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:38:42.139429 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:38:42.139441 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:38:42.139450 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:38:42.139459 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:38:42.139471 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:38:42.139480 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:38:42.139489 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:38:42.139501 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:38:42.139510 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:38:42.139520 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:38:42.139536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:38:42.139552 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:38:42.139567 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:38:42.139576 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:38:42.139586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:38:42.139598 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:38:42.139608 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:38:42.139617 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:38:42.139629 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:38:42.139690 systemd-journald[183]: Collecting audit messages is disabled.
Sep 12 17:38:42.139731 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:42.139744 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:38:42.139759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:38:42.139772 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:38:42.139786 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:38:42.139805 systemd-journald[183]: Journal started
Sep 12 17:38:42.139839 systemd-journald[183]: Runtime Journal (/run/log/journal/b732778b5661428fa99444f701e963a6) is 4.9M, max 39.3M, 34.4M free.
Sep 12 17:38:42.144001 systemd-modules-load[184]: Inserted module 'overlay'
Sep 12 17:38:42.188858 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:38:42.188906 kernel: Bridge firewalling registered
Sep 12 17:38:42.188919 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:38:42.179465 systemd-modules-load[184]: Inserted module 'br_netfilter'
Sep 12 17:38:42.199546 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:38:42.200615 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:42.210438 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:42.213364 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:38:42.229611 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:38:42.231243 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:38:42.247395 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:38:42.248509 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:42.252089 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:38:42.252988 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:38:42.257462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:38:42.263341 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:38:42.267186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:38:42.277886 dracut-cmdline[216]: dracut-dracut-053
Sep 12 17:38:42.283583 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:38:42.315025 systemd-resolved[218]: Positive Trust Anchors:
Sep 12 17:38:42.315043 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:38:42.315899 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:38:42.322887 systemd-resolved[218]: Defaulting to hostname 'linux'.
Sep 12 17:38:42.325605 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:38:42.326340 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:38:42.391160 kernel: SCSI subsystem initialized
Sep 12 17:38:42.403145 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:38:42.418180 kernel: iscsi: registered transport (tcp)
Sep 12 17:38:42.444506 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:38:42.444602 kernel: QLogic iSCSI HBA Driver
Sep 12 17:38:42.502050 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:38:42.507460 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:38:42.544756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:38:42.544861 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:38:42.546986 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 17:38:42.596144 kernel: raid6: avx2x4 gen() 28264 MB/s
Sep 12 17:38:42.613148 kernel: raid6: avx2x2 gen() 27591 MB/s
Sep 12 17:38:42.631240 kernel: raid6: avx2x1 gen() 21298 MB/s
Sep 12 17:38:42.631331 kernel: raid6: using algorithm avx2x4 gen() 28264 MB/s
Sep 12 17:38:42.649459 kernel: raid6: .... xor() 9679 MB/s, rmw enabled
Sep 12 17:38:42.649566 kernel: raid6: using avx2x2 recovery algorithm
Sep 12 17:38:42.676140 kernel: xor: automatically using best checksumming function avx
Sep 12 17:38:42.858141 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:38:42.872212 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:38:42.879430 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:38:42.896304 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Sep 12 17:38:42.901681 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:38:42.909449 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:38:42.937672 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Sep 12 17:38:42.990886 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:38:42.998392 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:38:43.087129 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:38:43.098642 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:38:43.126831 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:38:43.130650 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:38:43.131325 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:38:43.131956 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:38:43.140435 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:38:43.175258 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:38:43.193112 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Sep 12 17:38:43.203621 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 17:38:43.207124 kernel: scsi host0: Virtio SCSI HBA
Sep 12 17:38:43.220113 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 17:38:43.230573 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:38:43.230694 kernel: GPT:9289727 != 125829119
Sep 12 17:38:43.230714 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:38:43.230732 kernel: GPT:9289727 != 125829119
Sep 12 17:38:43.230770 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:38:43.230785 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:43.267878 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:38:43.269418 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:43.271319 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:43.272004 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:38:43.275466 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:43.277280 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:43.291882 kernel: ACPI: bus type USB registered
Sep 12 17:38:43.292581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:38:43.306958 kernel: usbcore: registered new interface driver usbfs
Sep 12 17:38:43.307012 kernel: usbcore: registered new interface driver hub
Sep 12 17:38:43.307033 kernel: usbcore: registered new device driver usb
Sep 12 17:38:43.312125 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Sep 12 17:38:43.318120 kernel: AVX2 version of gcm_enc/dec engaged.
Sep 12 17:38:43.320511 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB)
Sep 12 17:38:43.326106 kernel: AES CTR mode by8 optimization enabled
Sep 12 17:38:43.394106 kernel: libata version 3.00 loaded.
Sep 12 17:38:43.399638 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:43.410408 kernel: ata_piix 0000:00:01.1: version 2.13
Sep 12 17:38:43.415424 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:38:43.429576 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (454)
Sep 12 17:38:43.429622 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (462)
Sep 12 17:38:43.438555 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:38:43.447985 kernel: scsi host1: ata_piix
Sep 12 17:38:43.454130 kernel: scsi host2: ata_piix
Sep 12 17:38:43.457367 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Sep 12 17:38:43.457442 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Sep 12 17:38:43.466550 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:38:43.469211 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:38:43.480793 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Sep 12 17:38:43.483451 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Sep 12 17:38:43.483663 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Sep 12 17:38:43.485132 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Sep 12 17:38:43.486457 kernel: hub 1-0:1.0: USB hub found
Sep 12 17:38:43.487421 kernel: hub 1-0:1.0: 2 ports detected
Sep 12 17:38:43.493109 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:38:43.494046 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:38:43.502791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:38:43.519435 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:38:43.527975 disk-uuid[550]: Primary Header is updated.
Sep 12 17:38:43.527975 disk-uuid[550]: Secondary Entries is updated.
Sep 12 17:38:43.527975 disk-uuid[550]: Secondary Header is updated.
Sep 12 17:38:43.533690 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:43.538675 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:44.574110 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:38:44.575011 disk-uuid[551]: The operation has completed successfully.
Sep 12 17:38:44.628873 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:38:44.629022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:38:44.645536 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:38:44.658143 sh[565]: Success
Sep 12 17:38:44.675218 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Sep 12 17:38:44.742473 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:38:44.751230 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:38:44.753731 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:38:44.785668 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 17:38:44.785752 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:44.787563 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 17:38:44.789837 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:38:44.789913 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 17:38:44.799547 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:38:44.800868 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:38:44.807400 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:38:44.812336 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:38:44.827516 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:44.827607 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:44.829350 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:44.835135 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:44.851147 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:44.850805 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 17:38:44.860839 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:38:44.867378 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:38:44.976321 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:38:44.983492 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:38:45.020867 systemd-networkd[748]: lo: Link UP
Sep 12 17:38:45.020880 systemd-networkd[748]: lo: Gained carrier
Sep 12 17:38:45.023428 systemd-networkd[748]: Enumeration completed
Sep 12 17:38:45.023591 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:38:45.024783 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:38:45.024787 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Sep 12 17:38:45.026442 systemd[1]: Reached target network.target - Network.
Sep 12 17:38:45.028052 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:45.028056 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:38:45.029503 systemd-networkd[748]: eth0: Link UP
Sep 12 17:38:45.029525 systemd-networkd[748]: eth0: Gained carrier
Sep 12 17:38:45.029535 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Sep 12 17:38:45.035539 systemd-networkd[748]: eth1: Link UP
Sep 12 17:38:45.035544 systemd-networkd[748]: eth1: Gained carrier
Sep 12 17:38:45.035561 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:38:45.056210 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253
Sep 12 17:38:45.059230 systemd-networkd[748]: eth0: DHCPv4 address 144.126.222.162/20, gateway 144.126.208.1 acquired from 169.254.169.253
Sep 12 17:38:45.059590 ignition[655]: Ignition 2.19.0
Sep 12 17:38:45.059600 ignition[655]: Stage: fetch-offline
Sep 12 17:38:45.059686 ignition[655]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.059700 ignition[655]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.059820 ignition[655]: parsed url from cmdline: ""
Sep 12 17:38:45.064199 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:38:45.059824 ignition[655]: no config URL provided
Sep 12 17:38:45.059830 ignition[655]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:38:45.059839 ignition[655]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:38:45.059847 ignition[655]: failed to fetch config: resource requires networking
Sep 12 17:38:45.060144 ignition[655]: Ignition finished successfully
Sep 12 17:38:45.074753 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 17:38:45.101692 ignition[756]: Ignition 2.19.0
Sep 12 17:38:45.102770 ignition[756]: Stage: fetch
Sep 12 17:38:45.103616 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.103632 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.103809 ignition[756]: parsed url from cmdline: ""
Sep 12 17:38:45.103814 ignition[756]: no config URL provided
Sep 12 17:38:45.103822 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:38:45.103834 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:38:45.103862 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Sep 12 17:38:45.136361 ignition[756]: GET result: OK
Sep 12 17:38:45.137416 ignition[756]: parsing config with SHA512: c1876f87a9561b98928b0448e7cf528cb96c597b4fb8c81a2fed41d1ead51c3cdc21022e1b116a6c66664d248d885a73015dd679995955d543e2ba4cb0454790
Sep 12 17:38:45.143600 unknown[756]: fetched base config from "system"
Sep 12 17:38:45.144310 unknown[756]: fetched base config from "system"
Sep 12 17:38:45.144858 unknown[756]: fetched user config from "digitalocean"
Sep 12 17:38:45.146190 ignition[756]: fetch: fetch complete
Sep 12 17:38:45.146197 ignition[756]: fetch: fetch passed
Sep 12 17:38:45.146273 ignition[756]: Ignition finished successfully
Sep 12 17:38:45.150291 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 17:38:45.157464 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:38:45.191284 ignition[763]: Ignition 2.19.0
Sep 12 17:38:45.191300 ignition[763]: Stage: kargs
Sep 12 17:38:45.191557 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.191571 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.194687 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:38:45.192919 ignition[763]: kargs: kargs passed
Sep 12 17:38:45.193008 ignition[763]: Ignition finished successfully
Sep 12 17:38:45.205372 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:38:45.223619 ignition[769]: Ignition 2.19.0
Sep 12 17:38:45.223632 ignition[769]: Stage: disks
Sep 12 17:38:45.223840 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.223861 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.225002 ignition[769]: disks: disks passed
Sep 12 17:38:45.227253 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:38:45.225090 ignition[769]: Ignition finished successfully
Sep 12 17:38:45.234374 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:38:45.235604 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:38:45.236763 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:38:45.238012 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:38:45.239341 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:38:45.246458 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:38:45.265121 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 12 17:38:45.270436 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:38:45.280380 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:38:45.389321 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 17:38:45.390190 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:38:45.392030 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:38:45.402389 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:38:45.405980 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:38:45.410310 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Sep 12 17:38:45.414134 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (785)
Sep 12 17:38:45.419677 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:45.419774 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:45.421204 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:45.425435 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Sep 12 17:38:45.428537 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:38:45.431252 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:45.428822 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:38:45.435559 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:38:45.436274 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:38:45.445434 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:38:45.518736 coreos-metadata[787]: Sep 12 17:38:45.518 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:38:45.526392 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:38:45.532355 coreos-metadata[787]: Sep 12 17:38:45.532 INFO Fetch successful
Sep 12 17:38:45.540104 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:38:45.541354 coreos-metadata[788]: Sep 12 17:38:45.541 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:38:45.545293 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Sep 12 17:38:45.545815 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Sep 12 17:38:45.550009 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:38:45.555825 coreos-metadata[788]: Sep 12 17:38:45.554 INFO Fetch successful
Sep 12 17:38:45.558920 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:38:45.563981 coreos-metadata[788]: Sep 12 17:38:45.563 INFO wrote hostname ci-4081.3.6-a-756b4d7dc2 to /sysroot/etc/hostname
Sep 12 17:38:45.566027 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:38:45.689942 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:38:45.694324 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:38:45.698324 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:38:45.712146 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:45.738966 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:38:45.746642 ignition[908]: INFO : Ignition 2.19.0
Sep 12 17:38:45.746642 ignition[908]: INFO : Stage: mount
Sep 12 17:38:45.749303 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.749303 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.749303 ignition[908]: INFO : mount: mount passed
Sep 12 17:38:45.749303 ignition[908]: INFO : Ignition finished successfully
Sep 12 17:38:45.750705 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:38:45.759417 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:38:45.783783 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:38:45.790452 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:38:45.802123 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (919)
Sep 12 17:38:45.804106 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 17:38:45.806423 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 17:38:45.806505 kernel: BTRFS info (device vda6): using free space tree
Sep 12 17:38:45.812141 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 17:38:45.813660 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:38:45.848444 ignition[935]: INFO : Ignition 2.19.0
Sep 12 17:38:45.849595 ignition[935]: INFO : Stage: files
Sep 12 17:38:45.850477 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:38:45.850477 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:38:45.852081 ignition[935]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:38:45.853903 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:38:45.853903 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:38:45.858928 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:38:45.859959 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:38:45.859959 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:38:45.859913 unknown[935]: wrote ssh authorized keys file for user: core
Sep 12 17:38:45.863314 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:38:45.864383 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 17:38:45.904792 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:38:46.090192 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 17:38:46.090192 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:38:46.090192 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:38:46.090192 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing
link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:46.095795 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Sep 12 17:38:46.132317 systemd-networkd[748]: eth0: Gained IPv6LL Sep 12 17:38:46.195373 systemd-networkd[748]: eth1: Gained IPv6LL Sep 12 17:38:46.533755 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:38:46.963061 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Sep 12 17:38:46.963061 ignition[935]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:38:46.966382 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:46.966382 ignition[935]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:38:46.966382 ignition[935]: INFO : files: files passed Sep 12 17:38:46.966382 ignition[935]: INFO : Ignition finished successfully Sep 12 17:38:46.966707 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:38:46.978357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 12 17:38:46.982834 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 12 17:38:46.985088 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 12 17:38:46.985259 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 12 17:38:47.020311 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:47.020311 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:47.023011 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 12 17:38:47.026119 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:47.027456 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 12 17:38:47.042450 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 12 17:38:47.081333 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Sep 12 17:38:47.081528 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Sep 12 17:38:47.083377 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:38:47.084537 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:38:47.085938 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:38:47.092494 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:38:47.115739 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:47.122454 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:38:47.145674 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:47.147440 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:47.149250 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:38:47.150182 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:38:47.150405 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:38:47.152204 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:38:47.153133 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:38:47.154744 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:38:47.156241 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:38:47.157660 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:38:47.159269 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:38:47.160716 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:38:47.162408 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:38:47.163823 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:38:47.165332 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:38:47.166581 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:38:47.166815 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:38:47.168368 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:47.169865 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:47.171167 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:38:47.171321 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:47.172494 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:38:47.172708 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:38:47.174450 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:38:47.174615 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:38:47.175671 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:38:47.175886 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:38:47.177466 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Sep 12 17:38:47.177707 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. 
Sep 12 17:38:47.184576 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:38:47.185764 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:38:47.186180 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:47.199505 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:38:47.202649 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:38:47.202985 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:47.206443 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:38:47.206947 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:38:47.217490 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:38:47.217654 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:38:47.238989 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:38:47.252487 ignition[988]: INFO : Ignition 2.19.0 Sep 12 17:38:47.252487 ignition[988]: INFO : Stage: umount Sep 12 17:38:47.252487 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:38:47.252487 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:38:47.252487 ignition[988]: INFO : umount: umount passed Sep 12 17:38:47.252487 ignition[988]: INFO : Ignition finished successfully Sep 12 17:38:47.253583 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:38:47.254262 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:38:47.255472 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:38:47.255552 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:38:47.256680 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:38:47.256747 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:38:47.263385 systemd[1]: ignition-fetch.service: Deactivated successfully. Sep 12 17:38:47.263590 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Sep 12 17:38:47.289355 systemd[1]: Stopped target network.target - Network. Sep 12 17:38:47.302208 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:38:47.302367 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:38:47.305343 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:38:47.307422 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:38:47.311236 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:47.312349 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:38:47.313708 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:38:47.315098 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:38:47.315180 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:38:47.316260 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:38:47.316316 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:38:47.317415 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:38:47.317507 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:38:47.318604 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Sep 12 17:38:47.318665 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:38:47.319965 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:38:47.321402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:38:47.322985 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:38:47.323172 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:38:47.325620 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:38:47.325763 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:38:47.325883 systemd-networkd[748]: eth0: DHCPv6 lease lost Sep 12 17:38:47.329263 systemd-networkd[748]: eth1: DHCPv6 lease lost Sep 12 17:38:47.331364 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:38:47.331548 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 12 17:38:47.335650 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:38:47.336403 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:38:47.339087 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:38:47.339193 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:47.348354 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:38:47.349172 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:38:47.349359 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:38:47.350333 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:38:47.350415 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:47.351643 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:38:47.351710 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:47.352912 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:38:47.352972 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:47.354775 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:47.372589 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:38:47.373694 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:47.375363 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:38:47.375422 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:47.376412 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:38:47.376467 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:47.377962 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:38:47.378047 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:38:47.380062 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:38:47.380175 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:38:47.381444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:38:47.381495 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:38:47.389409 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 12 17:38:47.390431 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:38:47.390507 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:38:47.393107 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:38:47.393180 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:38:47.397894 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:38:47.398069 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:38:47.399125 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:38:47.399270 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:38:47.401158 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:38:47.409602 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:38:47.422772 systemd[1]: Switching root.
Sep 12 17:38:47.484877 systemd-journald[183]: Journal stopped
Sep 12 17:38:48.670275 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:38:48.670350 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:38:48.670369 kernel: SELinux: policy capability open_perms=1
Sep 12 17:38:48.670381 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:38:48.670392 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:38:48.670403 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:38:48.670421 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:38:48.670432 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:38:48.670460 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:38:48.670471 kernel: audit: type=1403 audit(1757698727.659:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:38:48.670489 systemd[1]: Successfully loaded SELinux policy in 42.749ms.
Sep 12 17:38:48.670519 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.245ms.
Sep 12 17:38:48.670532 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:38:48.670555 systemd[1]: Detected virtualization kvm.
Sep 12 17:38:48.670567 systemd[1]: Detected architecture x86-64.
Sep 12 17:38:48.670583 systemd[1]: Detected first boot.
Sep 12 17:38:48.670595 systemd[1]: Hostname set to <ci-4081.3.6-a-756b4d7dc2>.
Sep 12 17:38:48.670608 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:38:48.670625 zram_generator::config[1030]: No configuration found.
Sep 12 17:38:48.670642 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:38:48.670654 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:38:48.670665 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:38:48.670677 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:38:48.670689 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:38:48.670701 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:38:48.670713 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:38:48.670725 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:38:48.670739 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:38:48.670751 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:38:48.670762 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:38:48.670773 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:38:48.670784 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:38:48.670796 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:38:48.670807 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:38:48.670825 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:38:48.670837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:38:48.670851 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:38:48.670863 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Sep 12 17:38:48.670875 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:38:48.670887 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:38:48.670898 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:38:48.670910 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:38:48.670924 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:38:48.670936 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:38:48.670948 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:38:48.670960 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:38:48.670971 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:38:48.670988 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:38:48.670999 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:38:48.671011 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:38:48.671023 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:38:48.671034 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:38:48.671048 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:38:48.671060 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 17:38:48.671090 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:38:48.671102 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:38:48.671121 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:48.671142 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:38:48.671169 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:38:48.671182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 12 17:38:48.671199 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:38:48.671211 systemd[1]: Reached target machines.target - Containers. Sep 12 17:38:48.671222 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:38:48.671234 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:48.671245 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:38:48.671257 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:38:48.671269 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:48.671282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:48.671294 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:48.671308 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:38:48.671319 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:48.671331 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:38:48.671343 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:38:48.671355 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:38:48.671367 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:38:48.671379 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:38:48.671390 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:38:48.671411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:38:48.671423 kernel: fuse: init (API version 7.39) Sep 12 17:38:48.671434 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:38:48.671470 systemd-journald[1103]: Collecting audit messages is disabled. Sep 12 17:38:48.671498 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:38:48.671511 systemd-journald[1103]: Journal started Sep 12 17:38:48.671538 systemd-journald[1103]: Runtime Journal (/run/log/journal/b732778b5661428fa99444f701e963a6) is 4.9M, max 39.3M, 34.4M free. Sep 12 17:38:48.380754 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:38:48.403859 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:38:48.404400 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:38:48.677131 kernel: loop: module loaded Sep 12 17:38:48.682126 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:38:48.685133 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:38:48.687154 systemd[1]: Stopped verity-setup.service. Sep 12 17:38:48.691196 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:48.698392 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:38:48.704028 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Sep 12 17:38:48.706505 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:38:48.707811 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:38:48.709674 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:38:48.710321 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:38:48.713309 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:38:48.715576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:38:48.716677 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:38:48.716850 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:38:48.718680 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:48.718831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:48.720491 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:48.720655 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:48.721666 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:38:48.721827 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:38:48.729730 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:48.731228 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:48.732620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:38:48.734196 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:38:48.735734 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:38:48.764333 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:38:48.771246 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:38:48.783572 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:38:48.784406 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:38:48.784459 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:38:48.786694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 17:38:48.795701 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:38:48.810239 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:38:48.811175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:48.813334 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:38:48.817550 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:38:48.819201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:48.831403 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:38:48.832115 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Sep 12 17:38:48.833422 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:38:48.835914 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:38:48.838927 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:38:48.843898 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:38:48.846508 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:38:48.871259 systemd-journald[1103]: Time spent on flushing to /var/log/journal/b732778b5661428fa99444f701e963a6 is 72.162ms for 981 entries. Sep 12 17:38:48.871259 systemd-journald[1103]: System Journal (/var/log/journal/b732778b5661428fa99444f701e963a6) is 8.0M, max 195.6M, 187.6M free. Sep 12 17:38:48.977355 systemd-journald[1103]: Received client request to flush runtime journal. Sep 12 17:38:48.977418 kernel: ACPI: bus type drm_connector registered Sep 12 17:38:48.977461 kernel: loop0: detected capacity change from 0 to 140768 Sep 12 17:38:48.977483 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:38:48.871930 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:38:48.877527 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 17:38:48.911848 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:48.912043 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:48.923586 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:38:48.930798 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:38:48.943284 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 17:38:48.965040 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:38:48.979052 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:38:48.981522 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:38:48.994641 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:38:49.011951 kernel: loop1: detected capacity change from 0 to 224512 Sep 12 17:38:49.014228 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:38:49.014924 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 17:38:49.017263 udevadm[1150]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 17:38:49.062285 kernel: loop2: detected capacity change from 0 to 142488 Sep 12 17:38:49.072220 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:38:49.087287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:38:49.112937 kernel: loop3: detected capacity change from 0 to 8 Sep 12 17:38:49.139524 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 17:38:49.144001 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Sep 12 17:38:49.144022 systemd-tmpfiles[1171]: ACLs are not supported, ignoring. Sep 12 17:38:49.160532 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Sep 12 17:38:49.190466 kernel: loop5: detected capacity change from 0 to 224512 Sep 12 17:38:49.214220 kernel: loop6: detected capacity change from 0 to 142488 Sep 12 17:38:49.263634 kernel: loop7: detected capacity change from 0 to 8 Sep 12 17:38:49.265445 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Sep 12 17:38:49.266042 (sd-merge)[1175]: Merged extensions into '/usr'. Sep 12 17:38:49.277564 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:38:49.277590 systemd[1]: Reloading... Sep 12 17:38:49.502109 zram_generator::config[1206]: No configuration found. Sep 12 17:38:49.624981 ldconfig[1136]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:38:49.638183 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:49.692709 systemd[1]: Reloading finished in 414 ms. Sep 12 17:38:49.740002 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:38:49.741173 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:38:49.752415 systemd[1]: Starting ensure-sysext.service... Sep 12 17:38:49.756523 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:38:49.777257 systemd[1]: Reloading requested from client PID 1246 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:38:49.777283 systemd[1]: Reloading... Sep 12 17:38:49.825519 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:38:49.825905 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:38:49.826846 systemd-tmpfiles[1247]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:38:49.829403 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 12 17:38:49.829622 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 12 17:38:49.834965 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:49.835154 systemd-tmpfiles[1247]: Skipping /boot Sep 12 17:38:49.851771 systemd-tmpfiles[1247]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:38:49.852573 systemd-tmpfiles[1247]: Skipping /boot Sep 12 17:38:49.910111 zram_generator::config[1274]: No configuration found. Sep 12 17:38:50.066134 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:38:50.115991 systemd[1]: Reloading finished in 338 ms. Sep 12 17:38:50.134265 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:38:50.139871 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:38:50.154715 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:38:50.159403 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:38:50.166388 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Sep 12 17:38:50.170975 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:38:50.182365 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:38:50.188294 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:38:50.194997 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.197318 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:50.201510 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:50.210360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:50.219570 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:50.220518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:50.220658 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.224837 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.225085 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:50.225338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:50.235198 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:38:50.236796 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.240572 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.240793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:50.247476 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:38:50.249825 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:50.250063 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.255738 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:50.255926 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:50.258328 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:38:50.259183 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:38:50.260876 systemd[1]: Finished ensure-sysext.service. Sep 12 17:38:50.274644 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:38:50.279882 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:38:50.285589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:50.285782 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 17:38:50.292460 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:50.293228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:50.297772 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:50.297898 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:50.300512 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Sep 12 17:38:50.306448 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:38:50.307125 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:50.310193 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:38:50.322366 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:38:50.327514 augenrules[1355]: No rules Sep 12 17:38:50.330149 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:38:50.349481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:38:50.359349 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:38:50.360510 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:38:50.369113 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:38:50.462293 systemd-networkd[1365]: lo: Link UP Sep 12 17:38:50.462303 systemd-networkd[1365]: lo: Gained carrier Sep 12 17:38:50.463174 systemd-networkd[1365]: Enumeration completed Sep 12 17:38:50.463326 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:38:50.474309 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:38:50.562019 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:38:50.570287 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 12 17:38:50.570865 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:38:50.571000 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:38:50.578462 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:38:50.582768 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:38:50.591284 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:38:50.591978 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:38:50.592019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:38:50.592036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:38:50.592293 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:38:50.593052 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:38:50.596725 systemd-resolved[1327]: Positive Trust Anchors: Sep 12 17:38:50.597125 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:38:50.597238 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:38:50.604605 systemd-resolved[1327]: Using system hostname 'ci-4081.3.6-a-756b4d7dc2'. Sep 12 17:38:50.608301 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:38:50.609288 systemd[1]: Reached target network.target - Network. Sep 12 17:38:50.611218 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:38:50.625144 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1380) Sep 12 17:38:50.628856 kernel: ISO 9660 Extensions: RRIP_1991A Sep 12 17:38:50.634659 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 12 17:38:50.637582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:38:50.638643 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:38:50.657841 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:38:50.658050 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:38:50.659301 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:38:50.659576 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:38:50.660904 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:38:50.660959 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:38:50.695568 systemd-networkd[1365]: eth0: Configuring with /run/systemd/network/10-66:98:6b:77:1b:21.network. Sep 12 17:38:50.700448 systemd-networkd[1365]: eth0: Link UP Sep 12 17:38:50.700702 systemd-networkd[1365]: eth0: Gained carrier Sep 12 17:38:50.701129 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:38:50.707114 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:38:50.708270 systemd-timesyncd[1351]: Network configuration changed, trying to establish connection. Sep 12 17:38:50.749429 systemd-networkd[1365]: eth1: Configuring with /run/systemd/network/10-62:ce:a4:48:0d:c1.network. Sep 12 17:38:50.750372 systemd-networkd[1365]: eth1: Link UP Sep 12 17:38:50.750378 systemd-networkd[1365]: eth1: Gained carrier Sep 12 17:38:50.752809 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 12 17:38:50.815395 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Sep 12 17:38:50.836339 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:38:50.837102 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:38:50.873429 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:50.876117 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:38:50.879917 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:38:50.979230 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 12 17:38:50.979346 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 12 17:38:50.984278 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:38:50.985398 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:38:50.985442 kernel: [drm] features: -context_init Sep 12 17:38:50.988101 kernel: [drm] number of scanouts: 1 Sep 12 17:38:50.989101 kernel: [drm] number of cap sets: 0 Sep 12 17:38:50.993109 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 12 17:38:51.005099 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 17:38:51.005184 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:38:51.005238 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:38:51.031942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:38:51.032253 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:51.044838 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:38:51.065054 systemd-timesyncd[1351]: Contacted time server 172.232.15.202:123 (0.flatcar.pool.ntp.org). Sep 12 17:38:51.065149 systemd-timesyncd[1351]: Initial clock synchronization to Fri 2025-09-12 17:38:51.168165 UTC. Sep 12 17:38:51.069343 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:38:51.082143 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:38:51.093874 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:38:51.102439 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:38:51.116753 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:51.149572 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:38:51.151839 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:38:51.152015 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:38:51.152571 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:38:51.154272 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:38:51.154945 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:38:51.155131 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:38:51.155205 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:38:51.155270 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Sep 12 17:38:51.155304 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:38:51.155354 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:38:51.157350 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:38:51.159674 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:38:51.165478 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:38:51.167809 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:38:51.168545 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:38:51.169657 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:38:51.171313 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:38:51.171815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:51.171852 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:38:51.173860 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:38:51.179370 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:38:51.186159 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:38:51.190434 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:38:51.196691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:38:51.202331 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:38:51.204002 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:38:51.207500 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:38:51.212740 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:38:51.222293 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:38:51.226753 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:38:51.241965 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:38:51.244673 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:38:51.247319 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:38:51.248255 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:38:51.263247 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:38:51.265458 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:38:51.272592 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:38:51.273900 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:38:51.283995 jq[1437]: false Sep 12 17:38:51.286426 coreos-metadata[1435]: Sep 12 17:38:51.284 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:38:51.288839 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Sep 12 17:38:51.290540 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 17:38:51.294426 coreos-metadata[1435]: Sep 12 17:38:51.294 INFO Fetch successful
Sep 12 17:38:51.300009 jq[1448]: true
Sep 12 17:38:51.334913 extend-filesystems[1440]: Found loop4
Sep 12 17:38:51.334913 extend-filesystems[1440]: Found loop5
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found loop6
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found loop7
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda1
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda2
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda3
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found usr
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda4
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda6
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda7
Sep 12 17:38:51.338020 extend-filesystems[1440]: Found vda9
Sep 12 17:38:51.338020 extend-filesystems[1440]: Checking size of /dev/vda9
Sep 12 17:38:51.390757 jq[1465]: true
Sep 12 17:38:51.350712 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 17:38:51.390995 tar[1450]: linux-amd64/LICENSE
Sep 12 17:38:51.390995 tar[1450]: linux-amd64/helm
Sep 12 17:38:51.388620 dbus-daemon[1436]: [system] SELinux support is enabled
Sep 12 17:38:51.408332 update_engine[1447]: I20250912 17:38:51.376451 1447 main.cc:92] Flatcar Update Engine starting
Sep 12 17:38:51.350919 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 17:38:51.393460 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 17:38:51.404673 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 12 17:38:51.414727 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 17:38:51.422581 update_engine[1447]: I20250912 17:38:51.409325 1447 update_check_scheduler.cc:74] Next update check in 6m45s
Sep 12 17:38:51.422615 extend-filesystems[1440]: Resized partition /dev/vda9
Sep 12 17:38:51.414770 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 17:38:51.427404 extend-filesystems[1480]: resize2fs 1.47.1 (20-May-2024)
Sep 12 17:38:51.417456 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 17:38:51.417570 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Sep 12 17:38:51.417598 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 17:38:51.420873 systemd[1]: Started update-engine.service - Update Engine.
Sep 12 17:38:51.440429 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 12 17:38:51.442439 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
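Note: the metadata agent finished after fetching the instance document from DigitalOcean's link-local metadata service. The same document can be fetched by hand when debugging (URL exactly as logged above; jq is only optional pretty-printing):

    curl -s http://169.254.169.254/metadata/v1.json | jq .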
Sep 12 17:38:51.452152 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Sep 12 17:38:51.449969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 12 17:38:51.562585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1364)
Sep 12 17:38:51.612640 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Sep 12 17:38:51.612787 systemd-logind[1445]: New seat seat0.
Sep 12 17:38:51.639950 systemd-logind[1445]: Watching system buttons on /dev/input/event1 (Power Button)
Sep 12 17:38:51.640001 systemd-logind[1445]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Sep 12 17:38:51.640333 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 12 17:38:51.644831 extend-filesystems[1480]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 12 17:38:51.644831 extend-filesystems[1480]: old_desc_blocks = 1, new_desc_blocks = 8
Sep 12 17:38:51.644831 extend-filesystems[1480]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Sep 12 17:38:51.659518 extend-filesystems[1440]: Resized filesystem in /dev/vda9
Sep 12 17:38:51.659518 extend-filesystems[1440]: Found vdb
Sep 12 17:38:51.665135 bash[1498]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:38:51.646233 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 12 17:38:51.647334 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 12 17:38:51.653495 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 12 17:38:51.677158 systemd[1]: Starting sshkeys.service...
Sep 12 17:38:51.711488 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Sep 12 17:38:51.726939 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Sep 12 17:38:51.849661 coreos-metadata[1507]: Sep 12 17:38:51.849 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Sep 12 17:38:51.868320 coreos-metadata[1507]: Sep 12 17:38:51.867 INFO Fetch successful
Sep 12 17:38:51.871133 sshd_keygen[1476]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 12 17:38:51.870952 locksmithd[1483]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 12 17:38:51.888054 unknown[1507]: wrote ssh authorized keys file for user: core
Sep 12 17:38:51.891325 systemd-networkd[1365]: eth1: Gained IPv6LL
Sep 12 17:38:51.894788 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 12 17:38:51.909739 systemd[1]: Reached target network-online.target - Network is Online.
Sep 12 17:38:51.923790 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:38:51.930939 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 12 17:38:51.937457 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 12 17:38:51.953667 update-ssh-keys[1520]: Updated "/home/core/.ssh/authorized_keys"
Sep 12 17:38:51.949963 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 12 17:38:51.953603 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Sep 12 17:38:51.959958 systemd[1]: Finished sshkeys.service.
Sep 12 17:38:51.972732 systemd[1]: issuegen.service: Deactivated successfully.
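Note: extend-filesystems above grew the root ext4 filesystem on /dev/vda9 while it was mounted; ext4 supports online growth, so once the partition itself is large enough only the resize2fs step is needed. The manual equivalent is roughly:

    resize2fs /dev/vda9     # grow the mounted ext4 filesystem to fill its partition
    df -h /                 # confirm the new size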
Sep 12 17:38:51.972931 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 12 17:38:51.982508 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 12 17:38:52.032458 containerd[1471]: time="2025-09-12T17:38:52.031466453Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 12 17:38:52.036580 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 12 17:38:52.042115 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 12 17:38:52.055012 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 12 17:38:52.066152 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Sep 12 17:38:52.068877 systemd[1]: Reached target getty.target - Login Prompts.
Sep 12 17:38:52.092291 containerd[1471]: time="2025-09-12T17:38:52.091908634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.097984 containerd[1471]: time="2025-09-12T17:38:52.097912778Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:38:52.097984 containerd[1471]: time="2025-09-12T17:38:52.097974809Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 12 17:38:52.097984 containerd[1471]: time="2025-09-12T17:38:52.098000875Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 12 17:38:52.098222 containerd[1471]: time="2025-09-12T17:38:52.098204600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 12 17:38:52.098266 containerd[1471]: time="2025-09-12T17:38:52.098226670Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098301 containerd[1471]: time="2025-09-12T17:38:52.098284312Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098322 containerd[1471]: time="2025-09-12T17:38:52.098302159Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098578 containerd[1471]: time="2025-09-12T17:38:52.098552267Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098578 containerd[1471]: time="2025-09-12T17:38:52.098575160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098641 containerd[1471]: time="2025-09-12T17:38:52.098589335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098641 containerd[1471]: time="2025-09-12T17:38:52.098599917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098701 containerd[1471]: time="2025-09-12T17:38:52.098678336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.098918 containerd[1471]: time="2025-09-12T17:38:52.098900513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 12 17:38:52.099355 containerd[1471]: time="2025-09-12T17:38:52.099017836Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 12 17:38:52.099355 containerd[1471]: time="2025-09-12T17:38:52.099036124Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 12 17:38:52.101287 containerd[1471]: time="2025-09-12T17:38:52.101246021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 12 17:38:52.101363 containerd[1471]: time="2025-09-12T17:38:52.101329164Z" level=info msg="metadata content store policy set" policy=shared
Sep 12 17:38:52.119422 containerd[1471]: time="2025-09-12T17:38:52.119298280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 12 17:38:52.119422 containerd[1471]: time="2025-09-12T17:38:52.119379378Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 12 17:38:52.119422 containerd[1471]: time="2025-09-12T17:38:52.119400541Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 12 17:38:52.119422 containerd[1471]: time="2025-09-12T17:38:52.119417134Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 12 17:38:52.119600 containerd[1471]: time="2025-09-12T17:38:52.119433656Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 12 17:38:52.119696 containerd[1471]: time="2025-09-12T17:38:52.119637225Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.119886876Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120003296Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120017636Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120031587Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120047130Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120059803Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120161 containerd[1471]: time="2025-09-12T17:38:52.120073757Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120189405Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120231559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120246215Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120258375Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120270898Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120307241Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120336317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120350871Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120380839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120393945Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120407042Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120420997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120435714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120512 containerd[1471]: time="2025-09-12T17:38:52.120461916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120812 containerd[1471]: time="2025-09-12T17:38:52.120476280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120812 containerd[1471]: time="2025-09-12T17:38:52.120489062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.120812 containerd[1471]: time="2025-09-12T17:38:52.120504602Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.120517854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.121968713Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122047185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122098382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122141864Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122281091Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122472620Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122498793Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122558097Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122586550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122664448Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122683861Z" level=info msg="NRI interface is disabled by configuration."
Sep 12 17:38:52.122848 containerd[1471]: time="2025-09-12T17:38:52.122743433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 12 17:38:52.125303 containerd[1471]: time="2025-09-12T17:38:52.124753151Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 12 17:38:52.125303 containerd[1471]: time="2025-09-12T17:38:52.124909675Z" level=info msg="Connect containerd service"
Sep 12 17:38:52.125303 containerd[1471]: time="2025-09-12T17:38:52.124990362Z" level=info msg="using legacy CRI server"
Sep 12 17:38:52.125303 containerd[1471]: time="2025-09-12T17:38:52.125017338Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 12 17:38:52.125303 containerd[1471]: time="2025-09-12T17:38:52.125300558Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 12 17:38:52.129473 containerd[1471]: time="2025-09-12T17:38:52.128865380Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 12 17:38:52.129473 containerd[1471]: time="2025-09-12T17:38:52.129199339Z" level=info msg="Start subscribing containerd event"
Sep 12 17:38:52.129473 containerd[1471]: time="2025-09-12T17:38:52.129256615Z" level=info msg="Start recovering state"
Sep 12 17:38:52.129697 containerd[1471]: time="2025-09-12T17:38:52.129643507Z" level=info msg="Start event monitor"
Sep 12 17:38:52.129742 containerd[1471]: time="2025-09-12T17:38:52.129699323Z" level=info msg="Start snapshots syncer"
Sep 12 17:38:52.129742 containerd[1471]: time="2025-09-12T17:38:52.129711817Z" level=info msg="Start cni network conf syncer for default"
Sep 12 17:38:52.129742 containerd[1471]: time="2025-09-12T17:38:52.129720797Z" level=info msg="Start streaming server"
Sep 12 17:38:52.130374 containerd[1471]: time="2025-09-12T17:38:52.129961945Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 12 17:38:52.130374 containerd[1471]: time="2025-09-12T17:38:52.130034622Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 12 17:38:52.133067 containerd[1471]: time="2025-09-12T17:38:52.131000777Z" level=info msg="containerd successfully booted in 0.104313s"
Sep 12 17:38:52.131163 systemd[1]: Started containerd.service - containerd container runtime.
Sep 12 17:38:52.340383 systemd-networkd[1365]: eth0: Gained IPv6LL
Sep 12 17:38:52.604792 tar[1450]: linux-amd64/README.md
Sep 12 17:38:52.621552 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 12 17:38:53.208535 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:38:53.209815 (kubelet)[1559]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:38:53.212076 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 12 17:38:53.214774 systemd[1]: Startup finished in 1.404s (kernel) + 5.847s (initrd) + 5.597s (userspace) = 12.849s.
Sep 12 17:38:53.927781 kubelet[1559]: E0912 17:38:53.927704 1559 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:38:53.930103 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:38:53.930403 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:38:53.931140 systemd[1]: kubelet.service: Consumed 1.483s CPU time.
Sep 12 17:38:55.391831 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 12 17:38:55.397573 systemd[1]: Started sshd@0-144.126.222.162:22-147.75.109.163:57056.service - OpenSSH per-connection server daemon (147.75.109.163:57056).
Sep 12 17:38:55.481726 sshd[1571]: Accepted publickey for core from 147.75.109.163 port 57056 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:55.484819 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:55.496734 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 12 17:38:55.502455 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:38:55.505384 systemd-logind[1445]: New session 1 of user core.
Sep 12 17:38:55.526754 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
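Note: kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written during init/join, so a brief fail-and-restart loop before provisioning is expected rather than a defect. For reference, a minimal hand-written KubeletConfiguration has roughly this shape (a sketch only; in practice kubeadm generates the real file):

    cat <<'EOF' >/var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # matches SystemdCgroup:true in the containerd runc options logged above
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    EOF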
Sep 12 17:38:55.532622 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:38:55.543484 (systemd)[1575]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:38:55.669124 systemd[1575]: Queued start job for default target default.target.
Sep 12 17:38:55.678617 systemd[1575]: Created slice app.slice - User Application Slice.
Sep 12 17:38:55.678678 systemd[1575]: Reached target paths.target - Paths.
Sep 12 17:38:55.678697 systemd[1575]: Reached target timers.target - Timers.
Sep 12 17:38:55.680733 systemd[1575]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:38:55.697579 systemd[1575]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:38:55.697676 systemd[1575]: Reached target sockets.target - Sockets.
Sep 12 17:38:55.697697 systemd[1575]: Reached target basic.target - Basic System.
Sep 12 17:38:55.697762 systemd[1575]: Reached target default.target - Main User Target.
Sep 12 17:38:55.697798 systemd[1575]: Startup finished in 144ms.
Sep 12 17:38:55.698475 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:38:55.709395 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:38:55.787407 systemd[1]: Started sshd@1-144.126.222.162:22-147.75.109.163:57062.service - OpenSSH per-connection server daemon (147.75.109.163:57062).
Sep 12 17:38:55.847052 sshd[1586]: Accepted publickey for core from 147.75.109.163 port 57062 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:55.849674 sshd[1586]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:55.857439 systemd-logind[1445]: New session 2 of user core.
Sep 12 17:38:55.864526 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:38:55.933026 sshd[1586]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:55.947289 systemd[1]: sshd@1-144.126.222.162:22-147.75.109.163:57062.service: Deactivated successfully.
Sep 12 17:38:55.949654 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:38:55.952443 systemd-logind[1445]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:38:55.958760 systemd[1]: Started sshd@2-144.126.222.162:22-147.75.109.163:57068.service - OpenSSH per-connection server daemon (147.75.109.163:57068).
Sep 12 17:38:55.961990 systemd-logind[1445]: Removed session 2.
Sep 12 17:38:56.008627 sshd[1593]: Accepted publickey for core from 147.75.109.163 port 57068 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.010758 sshd[1593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.019241 systemd-logind[1445]: New session 3 of user core.
Sep 12 17:38:56.024452 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:38:56.083912 sshd[1593]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:56.101187 systemd[1]: sshd@2-144.126.222.162:22-147.75.109.163:57068.service: Deactivated successfully.
Sep 12 17:38:56.104241 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:38:56.106337 systemd-logind[1445]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:38:56.112730 systemd[1]: Started sshd@3-144.126.222.162:22-147.75.109.163:57080.service - OpenSSH per-connection server daemon (147.75.109.163:57080).
Sep 12 17:38:56.114975 systemd-logind[1445]: Removed session 3.
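Note: each inbound SSH connection shows up as its own sshd@N-IP:port.service instance because sshd here is socket-activated with per-connection instantiation (sshd.socket was seen listening earlier in the boot). The mechanism can be confirmed on the host; the expected shape is approximate, not copied from this machine:

    systemctl cat sshd.socket
    # expect (roughly) a [Socket] section containing:
    #   ListenStream=22
    #   Accept=yes     <- spawns one sshd@.service instance per connection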
Sep 12 17:38:56.171916 sshd[1600]: Accepted publickey for core from 147.75.109.163 port 57080 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.174297 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.181564 systemd-logind[1445]: New session 4 of user core.
Sep 12 17:38:56.189494 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:38:56.258176 sshd[1600]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:56.275541 systemd[1]: sshd@3-144.126.222.162:22-147.75.109.163:57080.service: Deactivated successfully.
Sep 12 17:38:56.278323 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:38:56.280742 systemd-logind[1445]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:38:56.287622 systemd[1]: Started sshd@4-144.126.222.162:22-147.75.109.163:57096.service - OpenSSH per-connection server daemon (147.75.109.163:57096).
Sep 12 17:38:56.289494 systemd-logind[1445]: Removed session 4.
Sep 12 17:38:56.337256 sshd[1607]: Accepted publickey for core from 147.75.109.163 port 57096 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.339601 sshd[1607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.346831 systemd-logind[1445]: New session 5 of user core.
Sep 12 17:38:56.353440 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:38:56.438468 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:38:56.438991 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:38:56.457267 sudo[1610]: pam_unix(sudo:session): session closed for user root
Sep 12 17:38:56.462616 sshd[1607]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:56.473700 systemd[1]: sshd@4-144.126.222.162:22-147.75.109.163:57096.service: Deactivated successfully.
Sep 12 17:38:56.477540 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:38:56.479963 systemd-logind[1445]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:38:56.484521 systemd[1]: Started sshd@5-144.126.222.162:22-147.75.109.163:57108.service - OpenSSH per-connection server daemon (147.75.109.163:57108).
Sep 12 17:38:56.486834 systemd-logind[1445]: Removed session 5.
Sep 12 17:38:56.560463 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 57108 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.562595 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.569933 systemd-logind[1445]: New session 6 of user core.
Sep 12 17:38:56.577495 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:38:56.644886 sudo[1619]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:38:56.645395 sudo[1619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:38:56.651726 sudo[1619]: pam_unix(sudo:session): session closed for user root
Sep 12 17:38:56.660056 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 17:38:56.660520 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:38:56.685068 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 17:38:56.688154 auditctl[1622]: No rules
Sep 12 17:38:56.688221 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:38:56.688559 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 17:38:56.693591 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 17:38:56.743912 augenrules[1640]: No rules
Sep 12 17:38:56.745656 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 17:38:56.747625 sudo[1618]: pam_unix(sudo:session): session closed for user root
Sep 12 17:38:56.751803 sshd[1615]: pam_unix(sshd:session): session closed for user core
Sep 12 17:38:56.766946 systemd[1]: sshd@5-144.126.222.162:22-147.75.109.163:57108.service: Deactivated successfully.
Sep 12 17:38:56.768895 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:38:56.771395 systemd-logind[1445]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:38:56.779596 systemd[1]: Started sshd@6-144.126.222.162:22-147.75.109.163:57116.service - OpenSSH per-connection server daemon (147.75.109.163:57116).
Sep 12 17:38:56.782264 systemd-logind[1445]: Removed session 6.
Sep 12 17:38:56.826828 sshd[1648]: Accepted publickey for core from 147.75.109.163 port 57116 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:38:56.829216 sshd[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:38:56.836130 systemd-logind[1445]: New session 7 of user core.
Sep 12 17:38:56.843497 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:38:56.906076 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:38:56.906720 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:38:57.454774 (dockerd)[1667]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:38:57.454848 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:38:57.934156 dockerd[1667]: time="2025-09-12T17:38:57.933468510Z" level=info msg="Starting up"
Sep 12 17:38:58.109653 dockerd[1667]: time="2025-09-12T17:38:58.109155165Z" level=info msg="Loading containers: start."
Sep 12 17:38:58.259253 kernel: Initializing XFRM netlink socket
Sep 12 17:38:58.382908 systemd-networkd[1365]: docker0: Link UP
Sep 12 17:38:58.399814 dockerd[1667]: time="2025-09-12T17:38:58.399747285Z" level=info msg="Loading containers: done."
Sep 12 17:38:58.420831 dockerd[1667]: time="2025-09-12T17:38:58.420745217Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:38:58.421067 dockerd[1667]: time="2025-09-12T17:38:58.420897488Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 17:38:58.421229 dockerd[1667]: time="2025-09-12T17:38:58.421152135Z" level=info msg="Daemon has completed initialization"
Sep 12 17:38:58.482457 dockerd[1667]: time="2025-09-12T17:38:58.481915901Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:38:58.482317 systemd[1]: Started docker.service - Docker Application Container Engine.
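Note: the overlay2 warning above is informational; with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled docker avoids the native overlay diff path, which mainly affects image-build performance rather than correctness. The active storage driver can be confirmed once the daemon is up:

    docker info --format '{{.Driver}}'      # expected: overlay2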
Sep 12 17:38:59.516706 containerd[1471]: time="2025-09-12T17:38:59.516641164Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 17:39:00.185133 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3223327784.mount: Deactivated successfully.
Sep 12 17:39:01.939782 containerd[1471]: time="2025-09-12T17:39:01.939176533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:01.945198 containerd[1471]: time="2025-09-12T17:39:01.943300893Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837916"
Sep 12 17:39:01.945198 containerd[1471]: time="2025-09-12T17:39:01.944888229Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:01.955131 containerd[1471]: time="2025-09-12T17:39:01.954391571Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:01.959496 containerd[1471]: time="2025-09-12T17:39:01.959036464Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 2.442338006s"
Sep 12 17:39:01.959496 containerd[1471]: time="2025-09-12T17:39:01.959136166Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 17:39:01.960708 containerd[1471]: time="2025-09-12T17:39:01.960385483Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 17:39:03.853635 containerd[1471]: time="2025-09-12T17:39:03.853528613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:03.855560 containerd[1471]: time="2025-09-12T17:39:03.855099113Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787027"
Sep 12 17:39:03.856391 containerd[1471]: time="2025-09-12T17:39:03.856333653Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:03.861557 containerd[1471]: time="2025-09-12T17:39:03.861488702Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:03.863307 containerd[1471]: time="2025-09-12T17:39:03.863252027Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 1.902812262s"
Sep 12 17:39:03.863307 containerd[1471]: time="2025-09-12T17:39:03.863306119Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 17:39:03.864377 containerd[1471]: time="2025-09-12T17:39:03.864339386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 17:39:04.180948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:39:04.192457 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:04.414398 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:04.414625 (kubelet)[1882]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:39:04.496969 kubelet[1882]: E0912 17:39:04.496709 1882 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:39:04.501701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:39:04.501895 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:39:05.579540 containerd[1471]: time="2025-09-12T17:39:05.579452697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:05.581106 containerd[1471]: time="2025-09-12T17:39:05.580972524Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176289"
Sep 12 17:39:05.582906 containerd[1471]: time="2025-09-12T17:39:05.582820040Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:05.589117 containerd[1471]: time="2025-09-12T17:39:05.587440936Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:05.589521 containerd[1471]: time="2025-09-12T17:39:05.589467237Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 1.725082306s"
Sep 12 17:39:05.589650 containerd[1471]: time="2025-09-12T17:39:05.589627753Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 17:39:05.590616 containerd[1471]: time="2025-09-12T17:39:05.590552612Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 17:39:05.720750 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Sep 12 17:39:06.880241 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452484843.mount: Deactivated successfully.
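Note: the PullImage requests above arrive through the CRI from the install tooling; the same pull can be reproduced by hand to debug registry access, for example with the standard CRI client pointed at containerd's socket (image name taken from the log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-scheduler:v1.32.9
    crictl images | grep kube-scheduler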
Sep 12 17:39:07.693680 containerd[1471]: time="2025-09-12T17:39:07.693601931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:07.694956 containerd[1471]: time="2025-09-12T17:39:07.694707401Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924206"
Sep 12 17:39:07.696068 containerd[1471]: time="2025-09-12T17:39:07.695740359Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:07.698351 containerd[1471]: time="2025-09-12T17:39:07.698305125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:07.699435 containerd[1471]: time="2025-09-12T17:39:07.699387335Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.108607127s"
Sep 12 17:39:07.699510 containerd[1471]: time="2025-09-12T17:39:07.699441454Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 17:39:07.700882 containerd[1471]: time="2025-09-12T17:39:07.700254642Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 17:39:08.263848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1085761140.mount: Deactivated successfully.
Sep 12 17:39:08.787461 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
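Note: systemd-resolved's "degraded feature set" messages mean the upstream resolvers did not answer EDNS0 probes, so resolved retries with plain UDP; resolution keeps working, just without EDNS0 extensions. The per-link resolver configuration can be inspected with:

    resolvectl status       # lists the DNS servers configured per link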
Sep 12 17:39:09.471193 containerd[1471]: time="2025-09-12T17:39:09.470274937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:09.472621 containerd[1471]: time="2025-09-12T17:39:09.472225845Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Sep 12 17:39:09.475114 containerd[1471]: time="2025-09-12T17:39:09.473430431Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:09.477516 containerd[1471]: time="2025-09-12T17:39:09.477472338Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:09.480043 containerd[1471]: time="2025-09-12T17:39:09.479974674Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.779674461s"
Sep 12 17:39:09.480206 containerd[1471]: time="2025-09-12T17:39:09.480047816Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 17:39:09.480738 containerd[1471]: time="2025-09-12T17:39:09.480705114Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:39:10.101922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833450517.mount: Deactivated successfully.
Sep 12 17:39:10.109437 containerd[1471]: time="2025-09-12T17:39:10.109338467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:10.109791 containerd[1471]: time="2025-09-12T17:39:10.109721334Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Sep 12 17:39:10.111367 containerd[1471]: time="2025-09-12T17:39:10.111287076Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:10.115103 containerd[1471]: time="2025-09-12T17:39:10.115005922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:10.117115 containerd[1471]: time="2025-09-12T17:39:10.116531029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 635.776443ms"
Sep 12 17:39:10.117115 containerd[1471]: time="2025-09-12T17:39:10.116594672Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 17:39:10.118103 containerd[1471]: time="2025-09-12T17:39:10.118032590Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 17:39:10.775163 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196099244.mount: Deactivated successfully.
Sep 12 17:39:13.023640 containerd[1471]: time="2025-09-12T17:39:13.023556873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:13.026205 containerd[1471]: time="2025-09-12T17:39:13.026125057Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682056"
Sep 12 17:39:13.027415 containerd[1471]: time="2025-09-12T17:39:13.027314324Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:13.033130 containerd[1471]: time="2025-09-12T17:39:13.032701811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:13.035548 containerd[1471]: time="2025-09-12T17:39:13.035060315Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.916955558s"
Sep 12 17:39:13.035548 containerd[1471]: time="2025-09-12T17:39:13.035157161Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 17:39:14.520857 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
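Note: the containerd CRI config logged earlier pins SandboxImage registry.k8s.io/pause:3.8, while pause:3.10 was just pulled; kubeadm-era tooling commonly pulls a newer pause image than the one the runtime config references, which is usually harmless but worth knowing when auditing images. The runtime's configured sandbox image can be checked with:

    containerd config dump | grep sandbox_image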
Sep 12 17:39:14.529992 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:14.690316 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:14.703042 (kubelet)[2039]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:39:14.790643 kubelet[2039]: E0912 17:39:14.790412 2039 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:39:14.794946 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:39:14.795150 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:39:17.568366 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:17.576744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:17.627098 systemd[1]: Reloading requested from client PID 2054 ('systemctl') (unit session-7.scope)...
Sep 12 17:39:17.627124 systemd[1]: Reloading...
Sep 12 17:39:17.794114 zram_generator::config[2093]: No configuration found.
Sep 12 17:39:17.948917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:39:18.051636 systemd[1]: Reloading finished in 423 ms.
Sep 12 17:39:18.117227 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:18.119615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:18.126183 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 17:39:18.126522 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:18.132659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:39:18.313559 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:39:18.324569 (kubelet)[2149]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:39:18.382784 kubelet[2149]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:39:18.382784 kubelet[2149]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:39:18.382784 kubelet[2149]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
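Note: by this start (PID 2149) only KUBELET_EXTRA_ARGS is still reported unset, so the deprecated flags in the warnings come in via KUBELET_KUBEADM_ARGS, i.e. the install script has written the kubeadm flags file. The standard kubeadm wiring looks roughly like this (a sketch of the usual 10-kubeadm.conf drop-in, not copied from this host):

    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (typical shape)
    [Service]
    # sets KUBELET_KUBEADM_ARGS
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    # may set KUBELET_EXTRA_ARGS
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS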
Sep 12 17:39:18.383493 kubelet[2149]: I0912 17:39:18.382844 2149 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:18.977205 kubelet[2149]: I0912 17:39:18.976752 2149 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:39:18.977205 kubelet[2149]: I0912 17:39:18.976813 2149 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:18.977963 kubelet[2149]: I0912 17:39:18.977644 2149 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:39:19.010392 kubelet[2149]: I0912 17:39:19.010340 2149 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:19.012221 kubelet[2149]: E0912 17:39:19.012121 2149 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://144.126.222.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:19.021178 kubelet[2149]: E0912 17:39:19.019614 2149 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:19.021178 kubelet[2149]: I0912 17:39:19.019650 2149 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:19.023773 kubelet[2149]: I0912 17:39:19.023726 2149 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:39:19.028671 kubelet[2149]: I0912 17:39:19.028585 2149 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:19.029186 kubelet[2149]: I0912 17:39:19.028834 2149 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-756b4d7dc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:39:19.029373 kubelet[2149]: I0912 17:39:19.029361 2149 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:19.029434 kubelet[2149]: I0912 17:39:19.029426 2149 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:39:19.029623 kubelet[2149]: I0912 17:39:19.029613 2149 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:19.035956 kubelet[2149]: I0912 17:39:19.035895 2149 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:39:19.038210 kubelet[2149]: I0912 17:39:19.037900 2149 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:19.038210 kubelet[2149]: I0912 17:39:19.037951 2149 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:39:19.038210 kubelet[2149]: I0912 17:39:19.037965 2149 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:39:19.043059 kubelet[2149]: W0912 17:39:19.042983 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.222.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-756b4d7dc2&limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:19.043385 kubelet[2149]: E0912 17:39:19.043351 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://144.126.222.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-756b4d7dc2&limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 
17:39:19.045007 kubelet[2149]: I0912 17:39:19.044971 2149 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:19.048937 kubelet[2149]: I0912 17:39:19.048891 2149 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:19.051791 kubelet[2149]: W0912 17:39:19.051744 2149 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:39:19.060541 kubelet[2149]: I0912 17:39:19.060503 2149 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:39:19.060786 kubelet[2149]: I0912 17:39:19.060768 2149 server.go:1287] "Started kubelet" Sep 12 17:39:19.062331 kubelet[2149]: W0912 17:39:19.061624 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.222.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:19.062331 kubelet[2149]: E0912 17:39:19.061684 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://144.126.222.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:19.062331 kubelet[2149]: I0912 17:39:19.061728 2149 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:19.067061 kubelet[2149]: I0912 17:39:19.066953 2149 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:19.067780 kubelet[2149]: I0912 17:39:19.067749 2149 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:19.068908 kubelet[2149]: I0912 17:39:19.068789 2149 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:39:19.072026 kubelet[2149]: I0912 17:39:19.071988 2149 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:19.075800 kubelet[2149]: E0912 17:39:19.073623 2149 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://144.126.222.162:6443/api/v1/namespaces/default/events\": dial tcp 144.126.222.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-a-756b4d7dc2.186499b56bf58626 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-756b4d7dc2,UID:ci-4081.3.6-a-756b4d7dc2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-756b4d7dc2,},FirstTimestamp:2025-09-12 17:39:19.060719142 +0000 UTC m=+0.731006848,LastTimestamp:2025-09-12 17:39:19.060719142 +0000 UTC m=+0.731006848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-756b4d7dc2,}" Sep 12 17:39:19.083130 kubelet[2149]: I0912 17:39:19.082716 2149 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:19.083659 kubelet[2149]: I0912 17:39:19.083628 2149 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:39:19.084143 
kubelet[2149]: E0912 17:39:19.084093 2149 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" Sep 12 17:39:19.084832 kubelet[2149]: I0912 17:39:19.084738 2149 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:39:19.085030 kubelet[2149]: I0912 17:39:19.085015 2149 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:19.085830 kubelet[2149]: W0912 17:39:19.085768 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.222.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:19.086618 kubelet[2149]: E0912 17:39:19.086104 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://144.126.222.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:19.086618 kubelet[2149]: E0912 17:39:19.086470 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.222.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-756b4d7dc2?timeout=10s\": dial tcp 144.126.222.162:6443: connect: connection refused" interval="200ms" Sep 12 17:39:19.087458 kubelet[2149]: I0912 17:39:19.087431 2149 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:19.087656 kubelet[2149]: I0912 17:39:19.087629 2149 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:19.091157 kubelet[2149]: E0912 17:39:19.091124 2149 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:19.092222 kubelet[2149]: I0912 17:39:19.092194 2149 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:19.121216 kubelet[2149]: I0912 17:39:19.121145 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:19.125927 kubelet[2149]: I0912 17:39:19.125383 2149 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:39:19.125927 kubelet[2149]: I0912 17:39:19.125434 2149 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:39:19.125927 kubelet[2149]: I0912 17:39:19.125464 2149 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
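The container_manager_linux.go entry above dumps the kubelet's effective node config as JSON, including the default hard-eviction thresholds. As a minimal sketch (illustrative Go types, not kubelet's own), an abbreviated copy of that fragment can be decoded like this:

package main

import (
	"encoding/json"
	"fmt"
)

// Illustrative mirrors of a few fields from the logged nodeConfig dump.
type thresholdValue struct {
	Quantity   *string `json:"Quantity"`   // e.g. "100Mi", or null
	Percentage float64 `json:"Percentage"` // e.g. 0.1 for 10%
}

type threshold struct {
	Signal   string         `json:"Signal"`
	Operator string         `json:"Operator"`
	Value    thresholdValue `json:"Value"`
}

type nodeConfig struct {
	NodeName               string      `json:"NodeName"`
	CgroupDriver           string      `json:"CgroupDriver"`
	HardEvictionThresholds []threshold `json:"HardEvictionThresholds"`
}

func main() {
	// Abbreviated copy of the JSON embedded in the log entry above
	// (two of the five thresholds shown).
	raw := `{"NodeName":"ci-4081.3.6-a-756b4d7dc2","CgroupDriver":"systemd","HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0}},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1}}]}`

	var cfg nodeConfig
	if err := json.Unmarshal([]byte(raw), &cfg); err != nil {
		panic(err)
	}
	fmt.Printf("node %s, cgroup driver %s\n", cfg.NodeName, cfg.CgroupDriver)
	for _, t := range cfg.HardEvictionThresholds {
		if t.Value.Quantity != nil {
			fmt.Printf("  evict when %s %s %s\n", t.Signal, t.Operator, *t.Value.Quantity)
		} else {
			fmt.Printf("  evict when %s %s %.0f%%\n", t.Signal, t.Operator, t.Value.Percentage*100)
		}
	}
}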
Sep 12 17:39:19.125927 kubelet[2149]: I0912 17:39:19.125474 2149 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:39:19.125927 kubelet[2149]: E0912 17:39:19.125553 2149 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:19.130034 kubelet[2149]: W0912 17:39:19.129420 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.222.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:19.130034 kubelet[2149]: E0912 17:39:19.129512 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://144.126.222.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:19.132286 kubelet[2149]: I0912 17:39:19.132258 2149 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:39:19.132641 kubelet[2149]: I0912 17:39:19.132622 2149 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:19.133063 kubelet[2149]: I0912 17:39:19.132762 2149 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:19.140692 kubelet[2149]: I0912 17:39:19.140635 2149 policy_none.go:49] "None policy: Start" Sep 12 17:39:19.140911 kubelet[2149]: I0912 17:39:19.140897 2149 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:39:19.140972 kubelet[2149]: I0912 17:39:19.140965 2149 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:19.150343 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:39:19.166966 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:39:19.174637 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:39:19.180627 kubelet[2149]: I0912 17:39:19.180575 2149 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:19.180913 kubelet[2149]: I0912 17:39:19.180883 2149 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:19.180967 kubelet[2149]: I0912 17:39:19.180912 2149 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:19.185333 kubelet[2149]: I0912 17:39:19.185269 2149 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:19.186837 kubelet[2149]: E0912 17:39:19.186588 2149 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:39:19.186837 kubelet[2149]: E0912 17:39:19.186679 2149 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-a-756b4d7dc2\" not found" Sep 12 17:39:19.242316 systemd[1]: Created slice kubepods-burstable-poda9b8474f0aa9b40aef4c0ca04e1e657e.slice - libcontainer container kubepods-burstable-poda9b8474f0aa9b40aef4c0ca04e1e657e.slice. 
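The slice names systemd creates above follow the kubelet's QoS cgroup layout: a kubepods.slice root, per-QoS child slices for burstable and besteffort, and one pod slice per pod UID. A sketch of the naming (the dash-to-underscore step is an assumption from systemd unit-name escaping; the UIDs in this log happen to contain no dashes):

package main

import (
	"fmt"
	"strings"
)

// podSlice derives a systemd slice name from a pod's QoS class and UID,
// matching the names seen in the log. Guaranteed pods sit directly under
// kubepods.slice; burstable/besteffort get an intermediate slice.
func podSlice(qos, uid string) string {
	parent := "kubepods"
	if qos != "guaranteed" {
		parent += "-" + qos // kubepods-burstable, kubepods-besteffort
	}
	return fmt.Sprintf("%s-pod%s.slice", parent, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// Reproduces one of the slice names from the log.
	fmt.Println(podSlice("burstable", "a9b8474f0aa9b40aef4c0ca04e1e657e"))
	// -> kubepods-burstable-poda9b8474f0aa9b40aef4c0ca04e1e657e.slice
}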
Sep 12 17:39:19.258602 kubelet[2149]: E0912 17:39:19.258559 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.260209 systemd[1]: Created slice kubepods-burstable-pod62ab3d3fa53d7412e04eeac439a0bbbf.slice - libcontainer container kubepods-burstable-pod62ab3d3fa53d7412e04eeac439a0bbbf.slice. Sep 12 17:39:19.263760 kubelet[2149]: E0912 17:39:19.263701 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.276676 systemd[1]: Created slice kubepods-burstable-podd431ba5f3db31ef9802bc835530d6dc5.slice - libcontainer container kubepods-burstable-podd431ba5f3db31ef9802bc835530d6dc5.slice. Sep 12 17:39:19.279857 kubelet[2149]: E0912 17:39:19.279619 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.283565 kubelet[2149]: I0912 17:39:19.283348 2149 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.284177 kubelet[2149]: E0912 17:39:19.284133 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.222.162:6443/api/v1/nodes\": dial tcp 144.126.222.162:6443: connect: connection refused" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287129 kubelet[2149]: I0912 17:39:19.286774 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d431ba5f3db31ef9802bc835530d6dc5-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-a-756b4d7dc2\" (UID: \"d431ba5f3db31ef9802bc835530d6dc5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287129 kubelet[2149]: I0912 17:39:19.286819 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287129 kubelet[2149]: I0912 17:39:19.286840 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287129 kubelet[2149]: I0912 17:39:19.286860 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287129 kubelet[2149]: I0912 17:39:19.286888 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-kubeconfig\") pod 
\"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287417 kubelet[2149]: I0912 17:39:19.286917 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287417 kubelet[2149]: I0912 17:39:19.286941 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287417 kubelet[2149]: I0912 17:39:19.286968 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.287417 kubelet[2149]: I0912 17:39:19.287000 2149 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.288187 kubelet[2149]: E0912 17:39:19.288044 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.222.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-756b4d7dc2?timeout=10s\": dial tcp 144.126.222.162:6443: connect: connection refused" interval="400ms" Sep 12 17:39:19.486258 kubelet[2149]: I0912 17:39:19.485933 2149 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.486744 kubelet[2149]: E0912 17:39:19.486365 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.222.162:6443/api/v1/nodes\": dial tcp 144.126.222.162:6443: connect: connection refused" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.560096 kubelet[2149]: E0912 17:39:19.559992 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:19.560958 containerd[1471]: time="2025-09-12T17:39:19.560899394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-756b4d7dc2,Uid:a9b8474f0aa9b40aef4c0ca04e1e657e,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:19.563150 systemd-resolved[1327]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 12 17:39:19.565147 kubelet[2149]: E0912 17:39:19.565112 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:19.571525 containerd[1471]: time="2025-09-12T17:39:19.571119209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-756b4d7dc2,Uid:62ab3d3fa53d7412e04eeac439a0bbbf,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:19.580727 kubelet[2149]: E0912 17:39:19.580316 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:19.581334 containerd[1471]: time="2025-09-12T17:39:19.581198813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-756b4d7dc2,Uid:d431ba5f3db31ef9802bc835530d6dc5,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:19.689572 kubelet[2149]: E0912 17:39:19.689491 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.222.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-756b4d7dc2?timeout=10s\": dial tcp 144.126.222.162:6443: connect: connection refused" interval="800ms" Sep 12 17:39:19.888630 kubelet[2149]: I0912 17:39:19.888481 2149 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:19.888965 kubelet[2149]: E0912 17:39:19.888917 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.222.162:6443/api/v1/nodes\": dial tcp 144.126.222.162:6443: connect: connection refused" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:20.078821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3492957945.mount: Deactivated successfully. 
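The dns.go "Nameserver limits exceeded" warnings above reflect the classic three-nameserver resolver limit; the applied line keeps a duplicate (67.207.67.2 appears twice), which suggests plain truncation without deduplication. A sketch under that assumption, with hypothetical resolv.conf contents:

package main

import "fmt"

// applyNameserverLimit keeps only the first `limit` servers, the way the
// applied nameserver line in the log is produced; extras are dropped and
// duplicates are not removed.
func applyNameserverLimit(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	// Hypothetical resolv.conf contents that would produce the logged line.
	fromResolvConf := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2", "1.1.1.1"}
	fmt.Println(applyNameserverLimit(fromResolvConf, 3))
	// -> [67.207.67.2 67.207.67.3 67.207.67.2]
}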
Sep 12 17:39:20.083068 kubelet[2149]: W0912 17:39:20.082958 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://144.126.222.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:20.083068 kubelet[2149]: E0912 17:39:20.083027 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://144.126.222.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:20.084384 containerd[1471]: time="2025-09-12T17:39:20.084323161Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:20.086447 containerd[1471]: time="2025-09-12T17:39:20.086382415Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:39:20.089115 containerd[1471]: time="2025-09-12T17:39:20.088835985Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:20.090823 containerd[1471]: time="2025-09-12T17:39:20.090639431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:20.091745 containerd[1471]: time="2025-09-12T17:39:20.091704674Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:20.093119 containerd[1471]: time="2025-09-12T17:39:20.092514914Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:39:20.093119 containerd[1471]: time="2025-09-12T17:39:20.092577926Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:20.094827 containerd[1471]: time="2025-09-12T17:39:20.094563363Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 512.933794ms" Sep 12 17:39:20.095431 containerd[1471]: time="2025-09-12T17:39:20.095386866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:39:20.098734 containerd[1471]: time="2025-09-12T17:39:20.097796058Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 536.815424ms" Sep 12 17:39:20.101110 
containerd[1471]: time="2025-09-12T17:39:20.100945120Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 529.703574ms" Sep 12 17:39:20.253233 kubelet[2149]: W0912 17:39:20.251590 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://144.126.222.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:20.253233 kubelet[2149]: E0912 17:39:20.251668 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://144.126.222.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:20.292935 containerd[1471]: time="2025-09-12T17:39:20.292786625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:20.293714 containerd[1471]: time="2025-09-12T17:39:20.293318471Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:20.293714 containerd[1471]: time="2025-09-12T17:39:20.293413036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.293873 containerd[1471]: time="2025-09-12T17:39:20.293694811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.299486 containerd[1471]: time="2025-09-12T17:39:20.299217425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:20.299486 containerd[1471]: time="2025-09-12T17:39:20.299285925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:20.299486 containerd[1471]: time="2025-09-12T17:39:20.299297914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.299486 containerd[1471]: time="2025-09-12T17:39:20.299415927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.303108 containerd[1471]: time="2025-09-12T17:39:20.301148184Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:20.303249 containerd[1471]: time="2025-09-12T17:39:20.301276337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:20.303249 containerd[1471]: time="2025-09-12T17:39:20.301320047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.303249 containerd[1471]: time="2025-09-12T17:39:20.301478518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:20.334362 systemd[1]: Started cri-containerd-36afbb952483ea684e27d3c96dd83c504cd8dafcd8da5d4940b2f0b08e01c6d6.scope - libcontainer container 36afbb952483ea684e27d3c96dd83c504cd8dafcd8da5d4940b2f0b08e01c6d6. Sep 12 17:39:20.340702 systemd[1]: Started cri-containerd-22c8a4c4356fa36aedd1e74719d3cf914353f2e290c820b69626847dce6e7dd9.scope - libcontainer container 22c8a4c4356fa36aedd1e74719d3cf914353f2e290c820b69626847dce6e7dd9. Sep 12 17:39:20.365148 systemd[1]: Started cri-containerd-aefb48657369188cfe421ee9daeca7768662bc322eea16827f3c3d5f7f1caee3.scope - libcontainer container aefb48657369188cfe421ee9daeca7768662bc322eea16827f3c3d5f7f1caee3. Sep 12 17:39:20.391149 kubelet[2149]: W0912 17:39:20.390834 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://144.126.222.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-756b4d7dc2&limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:20.391149 kubelet[2149]: E0912 17:39:20.390952 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://144.126.222.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-a-756b4d7dc2&limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:20.446509 containerd[1471]: time="2025-09-12T17:39:20.446455514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-a-756b4d7dc2,Uid:62ab3d3fa53d7412e04eeac439a0bbbf,Namespace:kube-system,Attempt:0,} returns sandbox id \"36afbb952483ea684e27d3c96dd83c504cd8dafcd8da5d4940b2f0b08e01c6d6\"" Sep 12 17:39:20.452464 kubelet[2149]: E0912 17:39:20.451737 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:20.465413 containerd[1471]: time="2025-09-12T17:39:20.465367444Z" level=info msg="CreateContainer within sandbox \"36afbb952483ea684e27d3c96dd83c504cd8dafcd8da5d4940b2f0b08e01c6d6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:39:20.488340 kubelet[2149]: W0912 17:39:20.488245 2149 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://144.126.222.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 144.126.222.162:6443: connect: connection refused Sep 12 17:39:20.488340 kubelet[2149]: E0912 17:39:20.488340 2149 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://144.126.222.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 144.126.222.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:39:20.491400 kubelet[2149]: E0912 17:39:20.491317 2149 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://144.126.222.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-a-756b4d7dc2?timeout=10s\": dial tcp 144.126.222.162:6443: connect: connection refused" interval="1.6s" Sep 12 17:39:20.494944 containerd[1471]: time="2025-09-12T17:39:20.494392983Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-a-756b4d7dc2,Uid:a9b8474f0aa9b40aef4c0ca04e1e657e,Namespace:kube-system,Attempt:0,} returns sandbox id \"aefb48657369188cfe421ee9daeca7768662bc322eea16827f3c3d5f7f1caee3\"" Sep 12 17:39:20.494944 containerd[1471]: time="2025-09-12T17:39:20.494643103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-a-756b4d7dc2,Uid:d431ba5f3db31ef9802bc835530d6dc5,Namespace:kube-system,Attempt:0,} returns sandbox id \"22c8a4c4356fa36aedd1e74719d3cf914353f2e290c820b69626847dce6e7dd9\"" Sep 12 17:39:20.495715 kubelet[2149]: E0912 17:39:20.495638 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:20.496175 kubelet[2149]: E0912 17:39:20.496023 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:20.496247 containerd[1471]: time="2025-09-12T17:39:20.496213321Z" level=info msg="CreateContainer within sandbox \"36afbb952483ea684e27d3c96dd83c504cd8dafcd8da5d4940b2f0b08e01c6d6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e1b4821ca51308b60181a2132ace7e78b2881b308a09b34ca79a988d3dc30752\"" Sep 12 17:39:20.502113 containerd[1471]: time="2025-09-12T17:39:20.500595063Z" level=info msg="StartContainer for \"e1b4821ca51308b60181a2132ace7e78b2881b308a09b34ca79a988d3dc30752\"" Sep 12 17:39:20.506244 containerd[1471]: time="2025-09-12T17:39:20.505326828Z" level=info msg="CreateContainer within sandbox \"22c8a4c4356fa36aedd1e74719d3cf914353f2e290c820b69626847dce6e7dd9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:39:20.508113 containerd[1471]: time="2025-09-12T17:39:20.508032117Z" level=info msg="CreateContainer within sandbox \"aefb48657369188cfe421ee9daeca7768662bc322eea16827f3c3d5f7f1caee3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:39:20.530801 containerd[1471]: time="2025-09-12T17:39:20.530753076Z" level=info msg="CreateContainer within sandbox \"aefb48657369188cfe421ee9daeca7768662bc322eea16827f3c3d5f7f1caee3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"788a612e6d38bf7d3ad2224073f709974356ed318ac23e6ff0c6378c25b89712\"" Sep 12 17:39:20.533127 containerd[1471]: time="2025-09-12T17:39:20.532230006Z" level=info msg="StartContainer for \"788a612e6d38bf7d3ad2224073f709974356ed318ac23e6ff0c6378c25b89712\"" Sep 12 17:39:20.539392 containerd[1471]: time="2025-09-12T17:39:20.539341496Z" level=info msg="CreateContainer within sandbox \"22c8a4c4356fa36aedd1e74719d3cf914353f2e290c820b69626847dce6e7dd9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5271dab914bc1d682936abad43988528592896454a566a3a6ee3b8c1b65fdb89\"" Sep 12 17:39:20.540222 containerd[1471]: time="2025-09-12T17:39:20.540190107Z" level=info msg="StartContainer for \"5271dab914bc1d682936abad43988528592896454a566a3a6ee3b8c1b65fdb89\"" Sep 12 17:39:20.565365 systemd[1]: Started cri-containerd-e1b4821ca51308b60181a2132ace7e78b2881b308a09b34ca79a988d3dc30752.scope - libcontainer container e1b4821ca51308b60181a2132ace7e78b2881b308a09b34ca79a988d3dc30752. 
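The sequence above (RunPodSandbox returning a sandbox id, CreateContainer within that sandbox, then StartContainer) is the standard CRI lifecycle. A minimal client sketch against containerd's CRI socket, reusing names and UIDs from the log; the socket path and image reference are assumptions, and a real caller supplies much fuller configs:

package main

import (
	"context"
	"fmt"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (default socket path assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: the pause sandbox, as in the log's RunPodSandbox entries.
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-apiserver-ci-4081.3.6-a-756b4d7dc2",
				Uid:       "62ab3d3fa53d7412e04eeac439a0bbbf",
				Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		panic(err)
	}

	// 2. CreateContainer inside the returned sandbox (image ref assumed).
	c, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.32.4"},
		},
	})
	if err != nil {
		panic(err)
	}

	// 3. StartContainer, matching the "StartContainer ... returns successfully" entries.
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: c.ContainerId}); err != nil {
		panic(err)
	}
	fmt.Println("started", c.ContainerId)
}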
Sep 12 17:39:20.596387 systemd[1]: Started cri-containerd-788a612e6d38bf7d3ad2224073f709974356ed318ac23e6ff0c6378c25b89712.scope - libcontainer container 788a612e6d38bf7d3ad2224073f709974356ed318ac23e6ff0c6378c25b89712. Sep 12 17:39:20.605420 systemd[1]: Started cri-containerd-5271dab914bc1d682936abad43988528592896454a566a3a6ee3b8c1b65fdb89.scope - libcontainer container 5271dab914bc1d682936abad43988528592896454a566a3a6ee3b8c1b65fdb89. Sep 12 17:39:20.685707 containerd[1471]: time="2025-09-12T17:39:20.685383094Z" level=info msg="StartContainer for \"788a612e6d38bf7d3ad2224073f709974356ed318ac23e6ff0c6378c25b89712\" returns successfully" Sep 12 17:39:20.688337 containerd[1471]: time="2025-09-12T17:39:20.685788427Z" level=info msg="StartContainer for \"e1b4821ca51308b60181a2132ace7e78b2881b308a09b34ca79a988d3dc30752\" returns successfully" Sep 12 17:39:20.690698 kubelet[2149]: I0912 17:39:20.690664 2149 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:20.691362 kubelet[2149]: E0912 17:39:20.691331 2149 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://144.126.222.162:6443/api/v1/nodes\": dial tcp 144.126.222.162:6443: connect: connection refused" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:20.745753 containerd[1471]: time="2025-09-12T17:39:20.745682210Z" level=info msg="StartContainer for \"5271dab914bc1d682936abad43988528592896454a566a3a6ee3b8c1b65fdb89\" returns successfully" Sep 12 17:39:21.157049 kubelet[2149]: E0912 17:39:21.157014 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:21.157464 kubelet[2149]: E0912 17:39:21.157228 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:21.161950 kubelet[2149]: E0912 17:39:21.161907 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:21.162852 kubelet[2149]: E0912 17:39:21.162733 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:21.164674 kubelet[2149]: E0912 17:39:21.164652 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:21.165106 kubelet[2149]: E0912 17:39:21.164921 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:22.169147 kubelet[2149]: E0912 17:39:22.167662 2149 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:22.169147 kubelet[2149]: E0912 17:39:22.167876 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:22.170388 kubelet[2149]: E0912 17:39:22.170165 2149 kubelet.go:3190] "No 
need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:22.170614 kubelet[2149]: E0912 17:39:22.170590 2149 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:22.294741 kubelet[2149]: I0912 17:39:22.293731 2149 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:22.969916 kubelet[2149]: E0912 17:39:22.969853 2149 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-a-756b4d7dc2\" not found" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.065406 kubelet[2149]: I0912 17:39:23.065355 2149 apiserver.go:52] "Watching apiserver" Sep 12 17:39:23.074529 kubelet[2149]: I0912 17:39:23.074465 2149 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.085797 kubelet[2149]: I0912 17:39:23.085472 2149 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.152355 kubelet[2149]: E0912 17:39:23.152251 2149 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.6-a-756b4d7dc2.186499b56bf58626 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-a-756b4d7dc2,UID:ci-4081.3.6-a-756b4d7dc2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-a-756b4d7dc2,},FirstTimestamp:2025-09-12 17:39:19.060719142 +0000 UTC m=+0.731006848,LastTimestamp:2025-09-12 17:39:19.060719142 +0000 UTC m=+0.731006848,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-a-756b4d7dc2,}" Sep 12 17:39:23.158099 kubelet[2149]: E0912 17:39:23.157316 2149 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.158099 kubelet[2149]: I0912 17:39:23.157351 2149 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.168068 kubelet[2149]: E0912 17:39:23.167667 2149 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.168068 kubelet[2149]: I0912 17:39:23.167702 2149 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.177633 kubelet[2149]: E0912 17:39:23.177578 2149 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081.3.6-a-756b4d7dc2\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:23.185870 kubelet[2149]: I0912 17:39:23.185777 2149 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:39:25.282133 
systemd[1]: Reloading requested from client PID 2422 ('systemctl') (unit session-7.scope)... Sep 12 17:39:25.282162 systemd[1]: Reloading... Sep 12 17:39:25.437131 zram_generator::config[2462]: No configuration found. Sep 12 17:39:25.639119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:39:25.755906 systemd[1]: Reloading finished in 473 ms. Sep 12 17:39:25.796230 kubelet[2149]: I0912 17:39:25.795931 2149 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:25.811556 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:25.831167 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:39:25.831618 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:25.831708 systemd[1]: kubelet.service: Consumed 1.285s CPU time, 125.6M memory peak, 0B memory swap peak. Sep 12 17:39:25.843619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:39:26.030401 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:39:26.034224 (kubelet)[2512]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:39:26.136116 kubelet[2512]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:26.136116 kubelet[2512]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:39:26.136116 kubelet[2512]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:39:26.136116 kubelet[2512]: I0912 17:39:26.135723 2512 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:39:26.148262 kubelet[2512]: I0912 17:39:26.148142 2512 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:39:26.148642 kubelet[2512]: I0912 17:39:26.148530 2512 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:39:26.149157 kubelet[2512]: I0912 17:39:26.149134 2512 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:39:26.153909 kubelet[2512]: I0912 17:39:26.153361 2512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
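The certificate_store entry above loads the rotated client credential from a single combined PEM. A sketch, assuming both certificate and key live in that one file (which is why the same path can serve as both arguments):

package main

import (
	"crypto/tls"
	"fmt"
)

func main() {
	// Path taken from the log; the file holds both the client certificate
	// and its private key, so it is passed as certFile and keyFile.
	const pair = "/var/lib/kubelet/pki/kubelet-client-current.pem"
	cert, err := tls.LoadX509KeyPair(pair, pair)
	if err != nil {
		fmt.Println("load failed:", err)
		return
	}
	fmt.Printf("loaded client chain with %d certificate(s)\n", len(cert.Certificate))
}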
Sep 12 17:39:26.161841 kubelet[2512]: I0912 17:39:26.161775 2512 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:39:26.168716 kubelet[2512]: E0912 17:39:26.168200 2512 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:39:26.168716 kubelet[2512]: I0912 17:39:26.168744 2512 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:39:26.176574 kubelet[2512]: I0912 17:39:26.176133 2512 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 12 17:39:26.176574 kubelet[2512]: I0912 17:39:26.176479 2512 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:39:26.178309 kubelet[2512]: I0912 17:39:26.176547 2512 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-a-756b4d7dc2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:39:26.178309 kubelet[2512]: I0912 17:39:26.178324 2512 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:39:26.178682 kubelet[2512]: I0912 17:39:26.178345 2512 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:39:26.178682 kubelet[2512]: I0912 17:39:26.178425 2512 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:26.178682 kubelet[2512]: I0912 17:39:26.178651 2512 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:39:26.178682 kubelet[2512]: I0912 17:39:26.178674 2512 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:39:26.178968 kubelet[2512]: I0912 17:39:26.178815 2512 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:39:26.178968 kubelet[2512]: I0912 17:39:26.178836 2512 apiserver.go:42] "Waiting for node sync 
before watching apiserver pods" Sep 12 17:39:26.184234 kubelet[2512]: I0912 17:39:26.184193 2512 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:39:26.184757 kubelet[2512]: I0912 17:39:26.184686 2512 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:39:26.186796 kubelet[2512]: I0912 17:39:26.186743 2512 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:39:26.186796 kubelet[2512]: I0912 17:39:26.186800 2512 server.go:1287] "Started kubelet" Sep 12 17:39:26.193190 kubelet[2512]: I0912 17:39:26.192908 2512 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:39:26.203908 kubelet[2512]: I0912 17:39:26.203839 2512 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:39:26.208395 kubelet[2512]: I0912 17:39:26.208217 2512 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:39:26.208774 kubelet[2512]: I0912 17:39:26.208693 2512 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:39:26.209091 kubelet[2512]: I0912 17:39:26.209029 2512 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:39:26.216846 kubelet[2512]: I0912 17:39:26.216796 2512 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:39:26.219215 kubelet[2512]: E0912 17:39:26.217920 2512 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081.3.6-a-756b4d7dc2\" not found" Sep 12 17:39:26.219215 kubelet[2512]: I0912 17:39:26.218633 2512 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:39:26.221376 kubelet[2512]: I0912 17:39:26.221340 2512 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:39:26.223635 kubelet[2512]: I0912 17:39:26.223587 2512 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:39:26.228512 kubelet[2512]: I0912 17:39:26.228277 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:39:26.231736 kubelet[2512]: I0912 17:39:26.231221 2512 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:39:26.231736 kubelet[2512]: I0912 17:39:26.231280 2512 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:39:26.231736 kubelet[2512]: I0912 17:39:26.231310 2512 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
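The ratelimit entry above configures the podresources endpoint at qps=100 with 10 burst tokens. A sketch of that token-bucket shape using golang.org/x/time/rate (illustrative wiring, not kubelet's):

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 100 tokens per second refill, bucket capacity 10, matching the log.
	lim := rate.NewLimiter(rate.Limit(100), 10)
	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	served := 0
	for lim.Wait(ctx) == nil { // blocks until a token is available or ctx expires
		served++
	}
	// Roughly burst + qps requests admitted in the one-second window.
	fmt.Println("requests admitted in ~1s:", served)
}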
Sep 12 17:39:26.231736 kubelet[2512]: I0912 17:39:26.231331 2512 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:39:26.231736 kubelet[2512]: E0912 17:39:26.231415 2512 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:39:26.238516 kubelet[2512]: I0912 17:39:26.238463 2512 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:39:26.238821 kubelet[2512]: I0912 17:39:26.238629 2512 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:39:26.248001 kubelet[2512]: E0912 17:39:26.247916 2512 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:39:26.250015 kubelet[2512]: I0912 17:39:26.249965 2512 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:39:26.334272 kubelet[2512]: E0912 17:39:26.332329 2512 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:39:26.355720 kubelet[2512]: I0912 17:39:26.355679 2512 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.355988 2512 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356027 2512 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356430 2512 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356448 2512 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356475 2512 policy_none.go:49] "None policy: Start" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356491 2512 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356507 2512 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:39:26.358056 kubelet[2512]: I0912 17:39:26.356691 2512 state_mem.go:75] "Updated machine memory state" Sep 12 17:39:26.364413 kubelet[2512]: I0912 17:39:26.364375 2512 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:39:26.364864 kubelet[2512]: I0912 17:39:26.364843 2512 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:39:26.365029 kubelet[2512]: I0912 17:39:26.364984 2512 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:39:26.368902 kubelet[2512]: I0912 17:39:26.368868 2512 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:39:26.372646 kubelet[2512]: E0912 17:39:26.372611 2512 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:39:26.481243 kubelet[2512]: I0912 17:39:26.480086 2512 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.494822 kubelet[2512]: I0912 17:39:26.494510 2512 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.494822 kubelet[2512]: I0912 17:39:26.494608 2512 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.533734 kubelet[2512]: I0912 17:39:26.533351 2512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.537101 kubelet[2512]: I0912 17:39:26.536035 2512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.537101 kubelet[2512]: I0912 17:39:26.535791 2512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.550671 kubelet[2512]: W0912 17:39:26.549412 2512 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:26.550671 kubelet[2512]: E0912 17:39:26.550182 2512 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.551827 kubelet[2512]: W0912 17:39:26.551786 2512 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:26.554521 kubelet[2512]: W0912 17:39:26.553651 2512 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:26.626956 kubelet[2512]: I0912 17:39:26.626114 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.626956 kubelet[2512]: I0912 17:39:26.626188 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.626956 kubelet[2512]: I0912 17:39:26.626229 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.626956 kubelet[2512]: I0912 17:39:26.626263 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d431ba5f3db31ef9802bc835530d6dc5-kubeconfig\") pod 
\"kube-scheduler-ci-4081.3.6-a-756b4d7dc2\" (UID: \"d431ba5f3db31ef9802bc835530d6dc5\") " pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.626956 kubelet[2512]: I0912 17:39:26.626310 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.627373 kubelet[2512]: I0912 17:39:26.626364 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/62ab3d3fa53d7412e04eeac439a0bbbf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" (UID: \"62ab3d3fa53d7412e04eeac439a0bbbf\") " pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.627373 kubelet[2512]: I0912 17:39:26.626402 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.627373 kubelet[2512]: I0912 17:39:26.626429 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.627373 kubelet[2512]: I0912 17:39:26.626460 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9b8474f0aa9b40aef4c0ca04e1e657e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-a-756b4d7dc2\" (UID: \"a9b8474f0aa9b40aef4c0ca04e1e657e\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:26.850921 kubelet[2512]: E0912 17:39:26.850852 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:26.854139 kubelet[2512]: E0912 17:39:26.853927 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:26.855850 kubelet[2512]: E0912 17:39:26.854156 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:27.181087 kubelet[2512]: I0912 17:39:27.181000 2512 apiserver.go:52] "Watching apiserver" Sep 12 17:39:27.219272 kubelet[2512]: I0912 17:39:27.218812 2512 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:39:27.314046 kubelet[2512]: E0912 17:39:27.313607 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 
17:39:27.322462 kubelet[2512]: I0912 17:39:27.320511 2512 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:27.324556 kubelet[2512]: E0912 17:39:27.321068 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:27.350648 kubelet[2512]: W0912 17:39:27.350611 2512 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:39:27.351306 kubelet[2512]: E0912 17:39:27.350996 2512 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081.3.6-a-756b4d7dc2\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" Sep 12 17:39:27.351306 kubelet[2512]: E0912 17:39:27.351231 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:27.452819 kubelet[2512]: I0912 17:39:27.452547 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-a-756b4d7dc2" podStartSLOduration=2.452524833 podStartE2EDuration="2.452524833s" podCreationTimestamp="2025-09-12 17:39:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:27.42283332 +0000 UTC m=+1.379060248" watchObservedRunningTime="2025-09-12 17:39:27.452524833 +0000 UTC m=+1.408751756" Sep 12 17:39:27.499521 kubelet[2512]: I0912 17:39:27.499333 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-a-756b4d7dc2" podStartSLOduration=1.499314123 podStartE2EDuration="1.499314123s" podCreationTimestamp="2025-09-12 17:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:27.498431384 +0000 UTC m=+1.454658312" watchObservedRunningTime="2025-09-12 17:39:27.499314123 +0000 UTC m=+1.455541046" Sep 12 17:39:27.499521 kubelet[2512]: I0912 17:39:27.499419 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-a-756b4d7dc2" podStartSLOduration=1.4994134350000001 podStartE2EDuration="1.499413435s" podCreationTimestamp="2025-09-12 17:39:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:27.453156388 +0000 UTC m=+1.409383307" watchObservedRunningTime="2025-09-12 17:39:27.499413435 +0000 UTC m=+1.455640362" Sep 12 17:39:28.315102 kubelet[2512]: E0912 17:39:28.314638 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:28.316762 kubelet[2512]: E0912 17:39:28.316639 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:29.317279 kubelet[2512]: E0912 17:39:29.316723 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:30.125912 kubelet[2512]: I0912 17:39:30.125246 2512 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:39:30.126102 containerd[1471]: time="2025-09-12T17:39:30.125756614Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:39:30.127240 kubelet[2512]: I0912 17:39:30.126698 2512 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:39:30.513824 kubelet[2512]: E0912 17:39:30.513321 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:30.785469 systemd[1]: Created slice kubepods-besteffort-podde6e9065_5160_45a8_900a_cf94cb23c81c.slice - libcontainer container kubepods-besteffort-podde6e9065_5160_45a8_900a_cf94cb23c81c.slice. Sep 12 17:39:30.855518 kubelet[2512]: I0912 17:39:30.855145 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx2sk\" (UniqueName: \"kubernetes.io/projected/de6e9065-5160-45a8-900a-cf94cb23c81c-kube-api-access-dx2sk\") pod \"kube-proxy-n9tct\" (UID: \"de6e9065-5160-45a8-900a-cf94cb23c81c\") " pod="kube-system/kube-proxy-n9tct" Sep 12 17:39:30.855518 kubelet[2512]: I0912 17:39:30.855355 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/de6e9065-5160-45a8-900a-cf94cb23c81c-kube-proxy\") pod \"kube-proxy-n9tct\" (UID: \"de6e9065-5160-45a8-900a-cf94cb23c81c\") " pod="kube-system/kube-proxy-n9tct" Sep 12 17:39:30.855518 kubelet[2512]: I0912 17:39:30.855414 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de6e9065-5160-45a8-900a-cf94cb23c81c-lib-modules\") pod \"kube-proxy-n9tct\" (UID: \"de6e9065-5160-45a8-900a-cf94cb23c81c\") " pod="kube-system/kube-proxy-n9tct" Sep 12 17:39:30.855518 kubelet[2512]: I0912 17:39:30.855463 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de6e9065-5160-45a8-900a-cf94cb23c81c-xtables-lock\") pod \"kube-proxy-n9tct\" (UID: \"de6e9065-5160-45a8-900a-cf94cb23c81c\") " pod="kube-system/kube-proxy-n9tct" Sep 12 17:39:31.100500 kubelet[2512]: E0912 17:39:31.099563 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:31.102696 containerd[1471]: time="2025-09-12T17:39:31.102616463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9tct,Uid:de6e9065-5160-45a8-900a-cf94cb23c81c,Namespace:kube-system,Attempt:0,}" Sep 12 17:39:31.180589 containerd[1471]: time="2025-09-12T17:39:31.179186544Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:31.180589 containerd[1471]: time="2025-09-12T17:39:31.180176489Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:31.180589 containerd[1471]: time="2025-09-12T17:39:31.180201396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:31.180589 containerd[1471]: time="2025-09-12T17:39:31.180377429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:31.229305 systemd[1]: Created slice kubepods-besteffort-podca265a4a_4ae4_4972_aeb9_4383bbea6c85.slice - libcontainer container kubepods-besteffort-podca265a4a_4ae4_4972_aeb9_4383bbea6c85.slice. Sep 12 17:39:31.249764 systemd[1]: Started cri-containerd-0f0a6d9dfc920cc5ee6017e23402c92d204cd7b8328372709489d6efe8841ef0.scope - libcontainer container 0f0a6d9dfc920cc5ee6017e23402c92d204cd7b8328372709489d6efe8841ef0. Sep 12 17:39:31.262040 kubelet[2512]: I0912 17:39:31.261988 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ca265a4a-4ae4-4972-aeb9-4383bbea6c85-var-lib-calico\") pod \"tigera-operator-755d956888-6lvk6\" (UID: \"ca265a4a-4ae4-4972-aeb9-4383bbea6c85\") " pod="tigera-operator/tigera-operator-755d956888-6lvk6" Sep 12 17:39:31.262040 kubelet[2512]: I0912 17:39:31.262038 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6xss\" (UniqueName: \"kubernetes.io/projected/ca265a4a-4ae4-4972-aeb9-4383bbea6c85-kube-api-access-p6xss\") pod \"tigera-operator-755d956888-6lvk6\" (UID: \"ca265a4a-4ae4-4972-aeb9-4383bbea6c85\") " pod="tigera-operator/tigera-operator-755d956888-6lvk6" Sep 12 17:39:31.287963 containerd[1471]: time="2025-09-12T17:39:31.287606781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n9tct,Uid:de6e9065-5160-45a8-900a-cf94cb23c81c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f0a6d9dfc920cc5ee6017e23402c92d204cd7b8328372709489d6efe8841ef0\"" Sep 12 17:39:31.288961 kubelet[2512]: E0912 17:39:31.288864 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:31.291792 containerd[1471]: time="2025-09-12T17:39:31.291742757Z" level=info msg="CreateContainer within sandbox \"0f0a6d9dfc920cc5ee6017e23402c92d204cd7b8328372709489d6efe8841ef0\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:39:31.313290 containerd[1471]: time="2025-09-12T17:39:31.313183488Z" level=info msg="CreateContainer within sandbox \"0f0a6d9dfc920cc5ee6017e23402c92d204cd7b8328372709489d6efe8841ef0\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0b015a463a934012b5b608c258e68c79d4703d0232e0f3de9307d13c905ba79c\"" Sep 12 17:39:31.315065 containerd[1471]: time="2025-09-12T17:39:31.314805093Z" level=info msg="StartContainer for \"0b015a463a934012b5b608c258e68c79d4703d0232e0f3de9307d13c905ba79c\"" Sep 12 17:39:31.325909 kubelet[2512]: E0912 17:39:31.325672 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:31.364370 systemd[1]: Started cri-containerd-0b015a463a934012b5b608c258e68c79d4703d0232e0f3de9307d13c905ba79c.scope - libcontainer container 
0b015a463a934012b5b608c258e68c79d4703d0232e0f3de9307d13c905ba79c. Sep 12 17:39:31.414160 containerd[1471]: time="2025-09-12T17:39:31.414096077Z" level=info msg="StartContainer for \"0b015a463a934012b5b608c258e68c79d4703d0232e0f3de9307d13c905ba79c\" returns successfully" Sep 12 17:39:31.545863 containerd[1471]: time="2025-09-12T17:39:31.545759427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6lvk6,Uid:ca265a4a-4ae4-4972-aeb9-4383bbea6c85,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:39:31.582323 containerd[1471]: time="2025-09-12T17:39:31.581932660Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:31.586207 containerd[1471]: time="2025-09-12T17:39:31.583337745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:31.586207 containerd[1471]: time="2025-09-12T17:39:31.584418306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:31.586207 containerd[1471]: time="2025-09-12T17:39:31.585909067Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:31.626700 systemd[1]: Started cri-containerd-6dc6855c60c07c404551ccfb65b419f773b8d661fccfb7e51a0b35786a89d76d.scope - libcontainer container 6dc6855c60c07c404551ccfb65b419f773b8d661fccfb7e51a0b35786a89d76d. Sep 12 17:39:31.688827 containerd[1471]: time="2025-09-12T17:39:31.688768849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-6lvk6,Uid:ca265a4a-4ae4-4972-aeb9-4383bbea6c85,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6dc6855c60c07c404551ccfb65b419f773b8d661fccfb7e51a0b35786a89d76d\"" Sep 12 17:39:31.692826 containerd[1471]: time="2025-09-12T17:39:31.692766273Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:39:31.972792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount827011203.mount: Deactivated successfully. Sep 12 17:39:32.331678 kubelet[2512]: E0912 17:39:32.331594 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:33.372593 kubelet[2512]: E0912 17:39:33.371710 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:33.433343 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633984682.mount: Deactivated successfully. 
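The recurring dns.go:153 "Nameserver limits exceeded" entries are the kubelet trimming the host's /etc/resolv.conf: the glibc resolver honors at most three nameserver lines, so the kubelet applies the first three and logs the rest as omitted (here the applied line even carries 67.207.67.2 twice, suggesting the source file listed duplicates). A minimal standalone sketch of that check, not the kubelet's actual code:

```go
// Sketch: reproduce the kubelet's resolv.conf nameserver-limit warning.
// Assumes /etc/resolv.conf exists and is readable.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // limit the glibc resolver (and the kubelet) honors

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		// The kubelet logs essentially this line and keeps only the first three.
		fmt.Printf("nameserver limits exceeded, applied line: %s\n",
			strings.Join(servers[:maxNameservers], " "))
	}
}
```

The warning is largely cosmetic for pods that use the cluster DNS service, but it fires on every resolv.conf sync, which is why it dominates this stretch of the journal.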
Sep 12 17:39:34.053476 kubelet[2512]: E0912 17:39:34.052842 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:34.081740 kubelet[2512]: I0912 17:39:34.081189 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n9tct" podStartSLOduration=4.081161831 podStartE2EDuration="4.081161831s" podCreationTimestamp="2025-09-12 17:39:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:39:32.346210797 +0000 UTC m=+6.302437732" watchObservedRunningTime="2025-09-12 17:39:34.081161831 +0000 UTC m=+8.037388774" Sep 12 17:39:34.374947 kubelet[2512]: E0912 17:39:34.373796 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:34.651924 kubelet[2512]: E0912 17:39:34.651120 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:35.377870 kubelet[2512]: E0912 17:39:35.375356 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:35.380069 kubelet[2512]: E0912 17:39:35.380033 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:35.487701 containerd[1471]: time="2025-09-12T17:39:35.486163565Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:35.487701 containerd[1471]: time="2025-09-12T17:39:35.487206106Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:39:35.488279 containerd[1471]: time="2025-09-12T17:39:35.488100658Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:35.490513 containerd[1471]: time="2025-09-12T17:39:35.490472377Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:35.491323 containerd[1471]: time="2025-09-12T17:39:35.491281089Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.79846944s" Sep 12 17:39:35.491396 containerd[1471]: time="2025-09-12T17:39:35.491330171Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:39:35.495387 containerd[1471]: time="2025-09-12T17:39:35.495342196Z" level=info msg="CreateContainer within sandbox 
\"6dc6855c60c07c404551ccfb65b419f773b8d661fccfb7e51a0b35786a89d76d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:39:35.513115 containerd[1471]: time="2025-09-12T17:39:35.513019983Z" level=info msg="CreateContainer within sandbox \"6dc6855c60c07c404551ccfb65b419f773b8d661fccfb7e51a0b35786a89d76d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"16f8cab4293978e519bb6f9d0b81615bb6a37f3e6ba587289e20973453d8cb30\"" Sep 12 17:39:35.514232 containerd[1471]: time="2025-09-12T17:39:35.514203556Z" level=info msg="StartContainer for \"16f8cab4293978e519bb6f9d0b81615bb6a37f3e6ba587289e20973453d8cb30\"" Sep 12 17:39:35.559358 systemd[1]: Started cri-containerd-16f8cab4293978e519bb6f9d0b81615bb6a37f3e6ba587289e20973453d8cb30.scope - libcontainer container 16f8cab4293978e519bb6f9d0b81615bb6a37f3e6ba587289e20973453d8cb30. Sep 12 17:39:35.596517 containerd[1471]: time="2025-09-12T17:39:35.596339021Z" level=info msg="StartContainer for \"16f8cab4293978e519bb6f9d0b81615bb6a37f3e6ba587289e20973453d8cb30\" returns successfully" Sep 12 17:39:36.367167 update_engine[1447]: I20250912 17:39:36.366885 1447 update_attempter.cc:509] Updating boot flags... Sep 12 17:39:36.389058 kubelet[2512]: E0912 17:39:36.388943 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:36.450110 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2858) Sep 12 17:39:36.534116 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2856) Sep 12 17:39:36.640121 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2856) Sep 12 17:39:43.421256 sudo[1651]: pam_unix(sudo:session): session closed for user root Sep 12 17:39:43.428219 sshd[1648]: pam_unix(sshd:session): session closed for user core Sep 12 17:39:43.438688 systemd[1]: sshd@6-144.126.222.162:22-147.75.109.163:57116.service: Deactivated successfully. Sep 12 17:39:43.446788 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:39:43.447813 systemd[1]: session-7.scope: Consumed 7.342s CPU time, 147.0M memory peak, 0B memory swap peak. Sep 12 17:39:43.449541 systemd-logind[1445]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:39:43.451651 systemd-logind[1445]: Removed session 7. Sep 12 17:39:49.374185 kubelet[2512]: I0912 17:39:49.374111 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-6lvk6" podStartSLOduration=14.573175174 podStartE2EDuration="18.374049216s" podCreationTimestamp="2025-09-12 17:39:31 +0000 UTC" firstStartedPulling="2025-09-12 17:39:31.692125604 +0000 UTC m=+5.648352507" lastFinishedPulling="2025-09-12 17:39:35.49299963 +0000 UTC m=+9.449226549" observedRunningTime="2025-09-12 17:39:36.409012249 +0000 UTC m=+10.365239168" watchObservedRunningTime="2025-09-12 17:39:49.374049216 +0000 UTC m=+23.330276122" Sep 12 17:39:49.418318 systemd[1]: Created slice kubepods-besteffort-podb5002d17_aaaf_4aa6_bbfd_964e2e5668a8.slice - libcontainer container kubepods-besteffort-podb5002d17_aaaf_4aa6_bbfd_964e2e5668a8.slice. 
Sep 12 17:39:49.516590 kubelet[2512]: I0912 17:39:49.516145 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kb5b\" (UniqueName: \"kubernetes.io/projected/b5002d17-aaaf-4aa6-bbfd-964e2e5668a8-kube-api-access-6kb5b\") pod \"calico-typha-6795974894-ktkms\" (UID: \"b5002d17-aaaf-4aa6-bbfd-964e2e5668a8\") " pod="calico-system/calico-typha-6795974894-ktkms" Sep 12 17:39:49.516590 kubelet[2512]: I0912 17:39:49.516211 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b5002d17-aaaf-4aa6-bbfd-964e2e5668a8-typha-certs\") pod \"calico-typha-6795974894-ktkms\" (UID: \"b5002d17-aaaf-4aa6-bbfd-964e2e5668a8\") " pod="calico-system/calico-typha-6795974894-ktkms" Sep 12 17:39:49.516590 kubelet[2512]: I0912 17:39:49.516244 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5002d17-aaaf-4aa6-bbfd-964e2e5668a8-tigera-ca-bundle\") pod \"calico-typha-6795974894-ktkms\" (UID: \"b5002d17-aaaf-4aa6-bbfd-964e2e5668a8\") " pod="calico-system/calico-typha-6795974894-ktkms" Sep 12 17:39:49.743759 kubelet[2512]: E0912 17:39:49.743639 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:49.762318 containerd[1471]: time="2025-09-12T17:39:49.761605647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6795974894-ktkms,Uid:b5002d17-aaaf-4aa6-bbfd-964e2e5668a8,Namespace:calico-system,Attempt:0,}" Sep 12 17:39:49.819483 systemd[1]: Created slice kubepods-besteffort-podbfb6fe0d_9ca7_47b0_ba20_d80b105df5ad.slice - libcontainer container kubepods-besteffort-podbfb6fe0d_9ca7_47b0_ba20_d80b105df5ad.slice. 
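The three VerifyControllerAttachedVolume entries for calico-typha-6795974894-ktkms name their sources in the UniqueName prefixes: a projected service-account token (kubernetes.io/projected), a secret (kubernetes.io/secret), and a configmap (kubernetes.io/configmap). Expressed as k8s.io/api types, the volumes behind them would look roughly like this; a sketch reconstructed from the log, not the operator's actual manifest:

```go
// Sketch: the volume sources implied by the reconciler entries above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	volumes := []corev1.Volume{
		// kube-api-access-* is the projected service-account token volume
		// the API server injects into every pod; its sources (token,
		// kube-root-ca.crt, namespace) are omitted here for brevity.
		{Name: "kube-api-access-6kb5b", VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{},
		}},
		{Name: "typha-certs", VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{SecretName: "typha-certs"},
		}},
		{Name: "tigera-ca-bundle", VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "tigera-ca-bundle"},
			},
		}},
	}
	for _, v := range volumes {
		fmt.Println("attach-verified volume:", v.Name)
	}
}
```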
Sep 12 17:39:49.827973 kubelet[2512]: I0912 17:39:49.827747 2512 status_manager.go:890] "Failed to get status for pod" podUID="bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad" pod="calico-system/calico-node-zc8l7" err="pods \"calico-node-zc8l7\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" Sep 12 17:39:49.828172 kubelet[2512]: W0912 17:39:49.828065 2512 reflector.go:569] object-"calico-system"/"cni-config": failed to list *v1.ConfigMap: configmaps "cni-config" is forbidden: User "system:node:ci-4081.3.6-a-756b4d7dc2" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object Sep 12 17:39:49.831927 kubelet[2512]: E0912 17:39:49.831595 2512 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"cni-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" logger="UnhandledError" Sep 12 17:39:49.831927 kubelet[2512]: W0912 17:39:49.831667 2512 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ci-4081.3.6-a-756b4d7dc2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object Sep 12 17:39:49.831927 kubelet[2512]: E0912 17:39:49.831753 2512 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" logger="UnhandledError" Sep 12 17:39:49.837525 containerd[1471]: time="2025-09-12T17:39:49.837115192Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:49.837525 containerd[1471]: time="2025-09-12T17:39:49.837270462Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:49.837525 containerd[1471]: time="2025-09-12T17:39:49.837307527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:49.839147 containerd[1471]: time="2025-09-12T17:39:49.837470763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:49.889325 systemd[1]: Started cri-containerd-1279c3c0a45fd6d12e9e40e31a434d36fd45eb7392c14ab2d597e1787c17ea98.scope - libcontainer container 1279c3c0a45fd6d12e9e40e31a434d36fd45eb7392c14ab2d597e1787c17ea98. 
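The "is forbidden ... no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" messages come from the API server's node authorizer: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to its node, and calico-node-zc8l7 was created moments ago, so the pod-to-node edge has not yet reached the authorizer's graph. The reflector retries and the errors clear on their own. One hedged way to inspect such a decision from the API side is a SubjectAccessReview for the node's identity (sketch, assuming a reachable cluster and a kubeconfig at the default path):

```go
// Sketch: ask the API server's authorizer chain (including the node
// authorizer) whether this node may read the calico-node pod.
package main

import (
	"context"
	"fmt"

	authorizationv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sar := &authorizationv1.SubjectAccessReview{
		Spec: authorizationv1.SubjectAccessReviewSpec{
			User:   "system:node:ci-4081.3.6-a-756b4d7dc2",
			Groups: []string{"system:nodes"},
			ResourceAttributes: &authorizationv1.ResourceAttributes{
				Namespace: "calico-system",
				Verb:      "get",
				Resource:  "pods",
				Name:      "calico-node-zc8l7",
			},
		},
	}
	resp, err := client.AuthorizationV1().SubjectAccessReviews().
		Create(context.Background(), sar, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```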
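Separately, the long run of driver-call failures that follows ("Failed to unmarshal output for command: init ... executable file not found in $PATH") is the kubelet probing its FlexVolume plugin directory for Calico's nodeagent~uds driver before the binary exists. The FlexVolume contract expects every invocation to print a JSON status object to stdout, so empty output produces exactly the "unexpected end of JSON input" seen here; the noise typically stops once calico-node's init container installs the uds binary into the flexvol-driver-host path mounted below. A minimal stand-in showing the init handshake the kubelet expects (hypothetical, not Calico's real driver):

```go
// Hypothetical FlexVolume driver stub: every call must answer with a
// JSON status object on stdout, which is what the missing uds binary
// in the log fails to do.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus, code int) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
	os.Exit(code)
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// "attach": false tells the kubelet this driver has no
		// controller-side attach step.
		reply(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		}, 0)
	}
	reply(driverStatus{Status: "Not supported"}, 1)
}
```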
Sep 12 17:39:49.919418 kubelet[2512]: I0912 17:39:49.919366 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-lib-modules\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919418 kubelet[2512]: I0912 17:39:49.919419 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-tigera-ca-bundle\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919646 kubelet[2512]: I0912 17:39:49.919442 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-node-certs\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919646 kubelet[2512]: I0912 17:39:49.919470 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-var-lib-calico\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919646 kubelet[2512]: I0912 17:39:49.919488 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-cni-net-dir\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919646 kubelet[2512]: I0912 17:39:49.919504 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-flexvol-driver-host\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919646 kubelet[2512]: I0912 17:39:49.919527 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-var-run-calico\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919773 kubelet[2512]: I0912 17:39:49.919544 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbsht\" (UniqueName: \"kubernetes.io/projected/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-kube-api-access-fbsht\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919773 kubelet[2512]: I0912 17:39:49.919561 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-policysync\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919773 kubelet[2512]: I0912 17:39:49.919578 2512 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-xtables-lock\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919773 kubelet[2512]: I0912 17:39:49.919594 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-cni-log-dir\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:49.919773 kubelet[2512]: I0912 17:39:49.919619 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-cni-bin-dir\") pod \"calico-node-zc8l7\" (UID: \"bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad\") " pod="calico-system/calico-node-zc8l7" Sep 12 17:39:50.031944 kubelet[2512]: E0912 17:39:50.031818 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.032604 kubelet[2512]: W0912 17:39:50.032397 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.033211 kubelet[2512]: E0912 17:39:50.033092 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.033285 kubelet[2512]: E0912 17:39:50.033262 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.033335 kubelet[2512]: W0912 17:39:50.033284 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.033335 kubelet[2512]: E0912 17:39:50.033324 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.034373 kubelet[2512]: E0912 17:39:50.034331 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.034373 kubelet[2512]: W0912 17:39:50.034350 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.034508 kubelet[2512]: E0912 17:39:50.034380 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.034822 kubelet[2512]: E0912 17:39:50.034700 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.034822 kubelet[2512]: W0912 17:39:50.034713 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.034822 kubelet[2512]: E0912 17:39:50.034750 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.035588 kubelet[2512]: E0912 17:39:50.035395 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.035588 kubelet[2512]: W0912 17:39:50.035411 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.035588 kubelet[2512]: E0912 17:39:50.035426 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.035849 kubelet[2512]: E0912 17:39:50.035704 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.035849 kubelet[2512]: W0912 17:39:50.035736 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.035849 kubelet[2512]: E0912 17:39:50.035752 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.037238 kubelet[2512]: E0912 17:39:50.037209 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.037238 kubelet[2512]: W0912 17:39:50.037234 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.037387 kubelet[2512]: E0912 17:39:50.037335 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.037942 kubelet[2512]: E0912 17:39:50.037813 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.037942 kubelet[2512]: W0912 17:39:50.037831 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.037942 kubelet[2512]: E0912 17:39:50.037846 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.054600 kubelet[2512]: E0912 17:39:50.054496 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.054600 kubelet[2512]: W0912 17:39:50.054530 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.054600 kubelet[2512]: E0912 17:39:50.054561 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.076679 kubelet[2512]: E0912 17:39:50.075852 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6" Sep 12 17:39:50.091199 kubelet[2512]: E0912 17:39:50.091111 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.091349 kubelet[2512]: W0912 17:39:50.091151 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.091349 kubelet[2512]: E0912 17:39:50.091300 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.091980 kubelet[2512]: E0912 17:39:50.091880 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.091980 kubelet[2512]: W0912 17:39:50.091931 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.091980 kubelet[2512]: E0912 17:39:50.091954 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.092975 kubelet[2512]: E0912 17:39:50.092948 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.092975 kubelet[2512]: W0912 17:39:50.092974 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.093056 kubelet[2512]: E0912 17:39:50.092994 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.093389 kubelet[2512]: E0912 17:39:50.093359 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.093389 kubelet[2512]: W0912 17:39:50.093380 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.093468 kubelet[2512]: E0912 17:39:50.093398 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.094409 kubelet[2512]: E0912 17:39:50.094383 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.094409 kubelet[2512]: W0912 17:39:50.094406 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.095992 kubelet[2512]: E0912 17:39:50.094424 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.096785 kubelet[2512]: E0912 17:39:50.096474 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.096785 kubelet[2512]: W0912 17:39:50.096502 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.097259 kubelet[2512]: E0912 17:39:50.096524 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.098851 kubelet[2512]: E0912 17:39:50.098486 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.098851 kubelet[2512]: W0912 17:39:50.098516 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.098851 kubelet[2512]: E0912 17:39:50.098541 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.100109 kubelet[2512]: E0912 17:39:50.099870 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.100109 kubelet[2512]: W0912 17:39:50.099893 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.100109 kubelet[2512]: E0912 17:39:50.099912 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.100248 kubelet[2512]: E0912 17:39:50.100187 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.100248 kubelet[2512]: W0912 17:39:50.100196 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.100248 kubelet[2512]: E0912 17:39:50.100205 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.102511 kubelet[2512]: E0912 17:39:50.100478 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.102511 kubelet[2512]: W0912 17:39:50.100494 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.102511 kubelet[2512]: E0912 17:39:50.100505 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.102647 kubelet[2512]: E0912 17:39:50.102608 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.102647 kubelet[2512]: W0912 17:39:50.102629 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.102694 kubelet[2512]: E0912 17:39:50.102650 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.104016 kubelet[2512]: E0912 17:39:50.103443 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.104016 kubelet[2512]: W0912 17:39:50.103470 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.104016 kubelet[2512]: E0912 17:39:50.103490 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.104549 kubelet[2512]: E0912 17:39:50.104408 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.104549 kubelet[2512]: W0912 17:39:50.104433 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.104549 kubelet[2512]: E0912 17:39:50.104452 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.104694 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108117 kubelet[2512]: W0912 17:39:50.104732 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.104748 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.104971 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108117 kubelet[2512]: W0912 17:39:50.104997 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.105014 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.105240 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108117 kubelet[2512]: W0912 17:39:50.105251 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.105263 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.108117 kubelet[2512]: E0912 17:39:50.106257 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108542 kubelet[2512]: W0912 17:39:50.106274 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106291 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106511 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108542 kubelet[2512]: W0912 17:39:50.106523 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106536 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106726 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108542 kubelet[2512]: W0912 17:39:50.106737 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106750 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.108542 kubelet[2512]: E0912 17:39:50.106950 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.108542 kubelet[2512]: W0912 17:39:50.106961 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.108807 kubelet[2512]: E0912 17:39:50.106974 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.127540 kubelet[2512]: E0912 17:39:50.125380 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.127540 kubelet[2512]: W0912 17:39:50.125407 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.127540 kubelet[2512]: E0912 17:39:50.125431 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.127540 kubelet[2512]: I0912 17:39:50.125464 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06c23272-db86-4e7e-9c53-92578f077ab6-varrun\") pod \"csi-node-driver-4k29j\" (UID: \"06c23272-db86-4e7e-9c53-92578f077ab6\") " pod="calico-system/csi-node-driver-4k29j" Sep 12 17:39:50.127540 kubelet[2512]: E0912 17:39:50.125709 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.127540 kubelet[2512]: W0912 17:39:50.125719 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.127540 kubelet[2512]: E0912 17:39:50.125730 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.127540 kubelet[2512]: I0912 17:39:50.125745 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpq2q\" (UniqueName: \"kubernetes.io/projected/06c23272-db86-4e7e-9c53-92578f077ab6-kube-api-access-cpq2q\") pod \"csi-node-driver-4k29j\" (UID: \"06c23272-db86-4e7e-9c53-92578f077ab6\") " pod="calico-system/csi-node-driver-4k29j" Sep 12 17:39:50.127540 kubelet[2512]: E0912 17:39:50.125892 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.127919 kubelet[2512]: W0912 17:39:50.125900 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.127919 kubelet[2512]: E0912 17:39:50.125909 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.127919 kubelet[2512]: I0912 17:39:50.125923 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06c23272-db86-4e7e-9c53-92578f077ab6-registration-dir\") pod \"csi-node-driver-4k29j\" (UID: \"06c23272-db86-4e7e-9c53-92578f077ab6\") " pod="calico-system/csi-node-driver-4k29j" Sep 12 17:39:50.127919 kubelet[2512]: E0912 17:39:50.127267 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.127919 kubelet[2512]: W0912 17:39:50.127285 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.127919 kubelet[2512]: E0912 17:39:50.127304 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.127919 kubelet[2512]: I0912 17:39:50.127331 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06c23272-db86-4e7e-9c53-92578f077ab6-socket-dir\") pod \"csi-node-driver-4k29j\" (UID: \"06c23272-db86-4e7e-9c53-92578f077ab6\") " pod="calico-system/csi-node-driver-4k29j" Sep 12 17:39:50.131564 kubelet[2512]: E0912 17:39:50.128192 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.131564 kubelet[2512]: W0912 17:39:50.128212 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.131564 kubelet[2512]: E0912 17:39:50.128234 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.131564 kubelet[2512]: I0912 17:39:50.128267 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06c23272-db86-4e7e-9c53-92578f077ab6-kubelet-dir\") pod \"csi-node-driver-4k29j\" (UID: \"06c23272-db86-4e7e-9c53-92578f077ab6\") " pod="calico-system/csi-node-driver-4k29j" Sep 12 17:39:50.131564 kubelet[2512]: E0912 17:39:50.128554 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.131564 kubelet[2512]: W0912 17:39:50.128565 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.131564 kubelet[2512]: E0912 17:39:50.128577 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.131564 kubelet[2512]: E0912 17:39:50.128949 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.131564 kubelet[2512]: W0912 17:39:50.128960 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.131911 kubelet[2512]: E0912 17:39:50.128972 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.131911 kubelet[2512]: E0912 17:39:50.131289 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.131911 kubelet[2512]: W0912 17:39:50.131303 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.131911 kubelet[2512]: E0912 17:39:50.131318 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.131911 kubelet[2512]: E0912 17:39:50.131470 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.131911 kubelet[2512]: W0912 17:39:50.131476 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.131911 kubelet[2512]: E0912 17:39:50.131484 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.132800 kubelet[2512]: E0912 17:39:50.132686 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.133163 kubelet[2512]: W0912 17:39:50.132933 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.133163 kubelet[2512]: E0912 17:39:50.132965 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.133622 kubelet[2512]: E0912 17:39:50.133592 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.133826 kubelet[2512]: W0912 17:39:50.133809 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.133972 kubelet[2512]: E0912 17:39:50.133934 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.134755 kubelet[2512]: E0912 17:39:50.134646 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.135372 kubelet[2512]: W0912 17:39:50.135151 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.135372 kubelet[2512]: E0912 17:39:50.135172 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.136462 kubelet[2512]: E0912 17:39:50.136443 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.136646 kubelet[2512]: W0912 17:39:50.136529 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.136646 kubelet[2512]: E0912 17:39:50.136546 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.137285 kubelet[2512]: E0912 17:39:50.137268 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.138901 kubelet[2512]: W0912 17:39:50.138275 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.138901 kubelet[2512]: E0912 17:39:50.138314 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.138901 kubelet[2512]: E0912 17:39:50.138846 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.138901 kubelet[2512]: W0912 17:39:50.138857 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.138901 kubelet[2512]: E0912 17:39:50.138869 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.158634 containerd[1471]: time="2025-09-12T17:39:50.158429359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6795974894-ktkms,Uid:b5002d17-aaaf-4aa6-bbfd-964e2e5668a8,Namespace:calico-system,Attempt:0,} returns sandbox id \"1279c3c0a45fd6d12e9e40e31a434d36fd45eb7392c14ab2d597e1787c17ea98\"" Sep 12 17:39:50.188214 kubelet[2512]: E0912 17:39:50.187157 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:50.210323 containerd[1471]: time="2025-09-12T17:39:50.210270576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:39:50.229675 kubelet[2512]: E0912 17:39:50.229642 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.229675 kubelet[2512]: W0912 17:39:50.229667 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.229973 kubelet[2512]: E0912 17:39:50.229695 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.230123 kubelet[2512]: E0912 17:39:50.230039 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.230123 kubelet[2512]: W0912 17:39:50.230051 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.230123 kubelet[2512]: E0912 17:39:50.230084 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.232438 kubelet[2512]: E0912 17:39:50.232403 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.232438 kubelet[2512]: W0912 17:39:50.232427 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.233440 kubelet[2512]: E0912 17:39:50.233402 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.233737 kubelet[2512]: E0912 17:39:50.233720 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.233737 kubelet[2512]: W0912 17:39:50.233735 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.234050 kubelet[2512]: E0912 17:39:50.234018 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.234133 kubelet[2512]: W0912 17:39:50.234057 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.234307 kubelet[2512]: E0912 17:39:50.234292 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.234307 kubelet[2512]: W0912 17:39:50.234305 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.234377 kubelet[2512]: E0912 17:39:50.234326 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.234589 kubelet[2512]: E0912 17:39:50.234573 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.234589 kubelet[2512]: W0912 17:39:50.234586 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.234668 kubelet[2512]: E0912 17:39:50.234596 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.235145 kubelet[2512]: E0912 17:39:50.235126 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.235145 kubelet[2512]: W0912 17:39:50.235141 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.235240 kubelet[2512]: E0912 17:39:50.235152 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.235571 kubelet[2512]: E0912 17:39:50.235426 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.235571 kubelet[2512]: W0912 17:39:50.235444 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.235571 kubelet[2512]: E0912 17:39:50.235458 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.237368 kubelet[2512]: E0912 17:39:50.237281 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.237368 kubelet[2512]: W0912 17:39:50.237305 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.237368 kubelet[2512]: E0912 17:39:50.237326 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.239332 kubelet[2512]: E0912 17:39:50.239165 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.239332 kubelet[2512]: W0912 17:39:50.239329 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.239429 kubelet[2512]: E0912 17:39:50.239345 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.239429 kubelet[2512]: E0912 17:39:50.239379 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.240912 kubelet[2512]: E0912 17:39:50.240882 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.240912 kubelet[2512]: W0912 17:39:50.240902 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.240912 kubelet[2512]: E0912 17:39:50.240918 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.241301 kubelet[2512]: E0912 17:39:50.241278 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.241301 kubelet[2512]: W0912 17:39:50.241292 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.241301 kubelet[2512]: E0912 17:39:50.241302 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.244187 kubelet[2512]: E0912 17:39:50.244156 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.244187 kubelet[2512]: W0912 17:39:50.244178 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.244380 kubelet[2512]: E0912 17:39:50.244196 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.244380 kubelet[2512]: E0912 17:39:50.244230 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.244708 kubelet[2512]: E0912 17:39:50.244588 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.244708 kubelet[2512]: W0912 17:39:50.244614 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.244708 kubelet[2512]: E0912 17:39:50.244627 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.246199 kubelet[2512]: E0912 17:39:50.245566 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.246199 kubelet[2512]: W0912 17:39:50.245580 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.246199 kubelet[2512]: E0912 17:39:50.245783 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.247149 kubelet[2512]: E0912 17:39:50.247123 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.247149 kubelet[2512]: W0912 17:39:50.247142 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.248800 kubelet[2512]: E0912 17:39:50.248774 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.248892 kubelet[2512]: W0912 17:39:50.248813 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.249771 kubelet[2512]: E0912 17:39:50.249751 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.249771 kubelet[2512]: W0912 17:39:50.249767 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.249952 kubelet[2512]: E0912 17:39:50.249782 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.251329 kubelet[2512]: E0912 17:39:50.251149 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.251725 kubelet[2512]: E0912 17:39:50.251525 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.251725 kubelet[2512]: E0912 17:39:50.251689 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.251725 kubelet[2512]: W0912 17:39:50.251702 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.251725 kubelet[2512]: E0912 17:39:50.251716 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.253460 kubelet[2512]: E0912 17:39:50.253434 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.253460 kubelet[2512]: W0912 17:39:50.253460 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.253567 kubelet[2512]: E0912 17:39:50.253475 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:50.255447 kubelet[2512]: E0912 17:39:50.255421 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.255447 kubelet[2512]: W0912 17:39:50.255445 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.255535 kubelet[2512]: E0912 17:39:50.255463 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.255988 kubelet[2512]: E0912 17:39:50.255758 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.255988 kubelet[2512]: W0912 17:39:50.255775 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.255988 kubelet[2512]: E0912 17:39:50.255789 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.256706 kubelet[2512]: E0912 17:39:50.256680 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.256706 kubelet[2512]: W0912 17:39:50.256702 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.256794 kubelet[2512]: E0912 17:39:50.256719 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.260953 kubelet[2512]: E0912 17:39:50.260907 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.260953 kubelet[2512]: W0912 17:39:50.260940 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.260953 kubelet[2512]: E0912 17:39:50.260962 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:50.317912 kubelet[2512]: E0912 17:39:50.317781 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:50.317912 kubelet[2512]: W0912 17:39:50.317818 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:50.317912 kubelet[2512]: E0912 17:39:50.317844 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:51.032883 kubelet[2512]: E0912 17:39:51.032727 2512 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Sep 12 17:39:51.032883 kubelet[2512]: E0912 17:39:51.032895 2512 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-node-certs podName:bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad nodeName:}" failed. No retries permitted until 2025-09-12 17:39:51.532859507 +0000 UTC m=+25.489086440 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad-node-certs") pod "calico-node-zc8l7" (UID: "bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad") : failed to sync secret cache: timed out waiting for the condition Sep 12 17:39:51.042235 kubelet[2512]: E0912 17:39:51.042067 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.042235 kubelet[2512]: W0912 17:39:51.042133 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.042235 kubelet[2512]: E0912 17:39:51.042162 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.143422 kubelet[2512]: E0912 17:39:51.143361 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.143422 kubelet[2512]: W0912 17:39:51.143404 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.143422 kubelet[2512]: E0912 17:39:51.143439 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.233291 kubelet[2512]: E0912 17:39:51.233219 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6" Sep 12 17:39:51.244794 kubelet[2512]: E0912 17:39:51.244494 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.244794 kubelet[2512]: W0912 17:39:51.244568 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.244794 kubelet[2512]: E0912 17:39:51.244666 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:51.347147 kubelet[2512]: E0912 17:39:51.346187 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.347147 kubelet[2512]: W0912 17:39:51.346232 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.347147 kubelet[2512]: E0912 17:39:51.346263 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.449117 kubelet[2512]: E0912 17:39:51.448117 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.449117 kubelet[2512]: W0912 17:39:51.448160 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.449117 kubelet[2512]: E0912 17:39:51.448218 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.550048 kubelet[2512]: E0912 17:39:51.549668 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.550048 kubelet[2512]: W0912 17:39:51.549713 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.550048 kubelet[2512]: E0912 17:39:51.549768 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.550789 kubelet[2512]: E0912 17:39:51.550364 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.550789 kubelet[2512]: W0912 17:39:51.550410 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.550789 kubelet[2512]: E0912 17:39:51.550432 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.550901 kubelet[2512]: E0912 17:39:51.550797 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.550901 kubelet[2512]: W0912 17:39:51.550832 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.550901 kubelet[2512]: E0912 17:39:51.550853 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:51.551401 kubelet[2512]: E0912 17:39:51.551222 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.551401 kubelet[2512]: W0912 17:39:51.551260 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.551401 kubelet[2512]: E0912 17:39:51.551278 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.552059 kubelet[2512]: E0912 17:39:51.551620 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.552059 kubelet[2512]: W0912 17:39:51.551640 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.552059 kubelet[2512]: E0912 17:39:51.551656 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.565224 kubelet[2512]: E0912 17:39:51.565177 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:51.565224 kubelet[2512]: W0912 17:39:51.565221 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:51.565437 kubelet[2512]: E0912 17:39:51.565254 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:51.634479 containerd[1471]: time="2025-09-12T17:39:51.633881784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zc8l7,Uid:bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad,Namespace:calico-system,Attempt:0,}" Sep 12 17:39:51.720034 containerd[1471]: time="2025-09-12T17:39:51.719125772Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:39:51.720034 containerd[1471]: time="2025-09-12T17:39:51.719189516Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:39:51.720034 containerd[1471]: time="2025-09-12T17:39:51.719243344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:51.721545 containerd[1471]: time="2025-09-12T17:39:51.721308746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:39:51.781373 systemd[1]: Started cri-containerd-d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167.scope - libcontainer container d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167. Sep 12 17:39:51.835018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2696900856.mount: Deactivated successfully. 
Sep 12 17:39:51.861983 containerd[1471]: time="2025-09-12T17:39:51.861931625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zc8l7,Uid:bfb6fe0d-9ca7-47b0-ba20-d80b105df5ad,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\"" Sep 12 17:39:52.942362 containerd[1471]: time="2025-09-12T17:39:52.942290406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:52.944644 containerd[1471]: time="2025-09-12T17:39:52.944532949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 17:39:52.945768 containerd[1471]: time="2025-09-12T17:39:52.945716830Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:52.950352 containerd[1471]: time="2025-09-12T17:39:52.950292223Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:39:52.951240 containerd[1471]: time="2025-09-12T17:39:52.951173967Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.740800658s" Sep 12 17:39:52.951240 containerd[1471]: time="2025-09-12T17:39:52.951214392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:39:52.953096 containerd[1471]: time="2025-09-12T17:39:52.952951146Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:39:52.982789 containerd[1471]: time="2025-09-12T17:39:52.982730333Z" level=info msg="CreateContainer within sandbox \"1279c3c0a45fd6d12e9e40e31a434d36fd45eb7392c14ab2d597e1787c17ea98\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:39:53.122607 containerd[1471]: time="2025-09-12T17:39:53.121619118Z" level=info msg="CreateContainer within sandbox \"1279c3c0a45fd6d12e9e40e31a434d36fd45eb7392c14ab2d597e1787c17ea98\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3a4e4d1485daa945bf9e4d5e51c98cd961e49981175121899a33f90dad2012a0\"" Sep 12 17:39:53.127029 containerd[1471]: time="2025-09-12T17:39:53.126987704Z" level=info msg="StartContainer for \"3a4e4d1485daa945bf9e4d5e51c98cd961e49981175121899a33f90dad2012a0\"" Sep 12 17:39:53.185471 systemd[1]: Started cri-containerd-3a4e4d1485daa945bf9e4d5e51c98cd961e49981175121899a33f90dad2012a0.scope - libcontainer container 3a4e4d1485daa945bf9e4d5e51c98cd961e49981175121899a33f90dad2012a0. 
Sep 12 17:39:53.232387 kubelet[2512]: E0912 17:39:53.231918 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6" Sep 12 17:39:53.279402 containerd[1471]: time="2025-09-12T17:39:53.278859619Z" level=info msg="StartContainer for \"3a4e4d1485daa945bf9e4d5e51c98cd961e49981175121899a33f90dad2012a0\" returns successfully" Sep 12 17:39:53.449152 kubelet[2512]: E0912 17:39:53.448220 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:39:53.474848 kubelet[2512]: I0912 17:39:53.474775 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6795974894-ktkms" podStartSLOduration=1.727855785 podStartE2EDuration="4.474750784s" podCreationTimestamp="2025-09-12 17:39:49 +0000 UTC" firstStartedPulling="2025-09-12 17:39:50.205828861 +0000 UTC m=+24.162055793" lastFinishedPulling="2025-09-12 17:39:52.952723873 +0000 UTC m=+26.908950792" observedRunningTime="2025-09-12 17:39:53.474601053 +0000 UTC m=+27.430827983" watchObservedRunningTime="2025-09-12 17:39:53.474750784 +0000 UTC m=+27.430977711" Sep 12 17:39:53.536698 kubelet[2512]: E0912 17:39:53.536594 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.536698 kubelet[2512]: W0912 17:39:53.536620 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.536698 kubelet[2512]: E0912 17:39:53.536643 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.538292 kubelet[2512]: E0912 17:39:53.538139 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.538292 kubelet[2512]: W0912 17:39:53.538166 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.538292 kubelet[2512]: E0912 17:39:53.538195 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.538851 kubelet[2512]: E0912 17:39:53.538671 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.538851 kubelet[2512]: W0912 17:39:53.538692 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.538851 kubelet[2512]: E0912 17:39:53.538713 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:53.539183 kubelet[2512]: E0912 17:39:53.539097 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.539183 kubelet[2512]: W0912 17:39:53.539118 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.539183 kubelet[2512]: E0912 17:39:53.539129 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.539594 kubelet[2512]: E0912 17:39:53.539444 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.539594 kubelet[2512]: W0912 17:39:53.539454 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.539594 kubelet[2512]: E0912 17:39:53.539464 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.541105 kubelet[2512]: E0912 17:39:53.539730 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.541105 kubelet[2512]: W0912 17:39:53.539740 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.541105 kubelet[2512]: E0912 17:39:53.539750 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.541440 kubelet[2512]: E0912 17:39:53.541426 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.541509 kubelet[2512]: W0912 17:39:53.541499 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.541559 kubelet[2512]: E0912 17:39:53.541550 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.541888 kubelet[2512]: E0912 17:39:53.541875 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.541965 kubelet[2512]: W0912 17:39:53.541956 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.542013 kubelet[2512]: E0912 17:39:53.542004 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:53.542299 kubelet[2512]: E0912 17:39:53.542288 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.542377 kubelet[2512]: W0912 17:39:53.542368 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.542598 kubelet[2512]: E0912 17:39:53.542582 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.544138 kubelet[2512]: E0912 17:39:53.543359 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.544138 kubelet[2512]: W0912 17:39:53.543378 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.544138 kubelet[2512]: E0912 17:39:53.543396 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.544569 kubelet[2512]: E0912 17:39:53.544470 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.544569 kubelet[2512]: W0912 17:39:53.544489 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.544569 kubelet[2512]: E0912 17:39:53.544510 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.544983 kubelet[2512]: E0912 17:39:53.544871 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.544983 kubelet[2512]: W0912 17:39:53.544882 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.544983 kubelet[2512]: E0912 17:39:53.544893 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.546147 kubelet[2512]: E0912 17:39:53.545986 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.546147 kubelet[2512]: W0912 17:39:53.546003 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.546147 kubelet[2512]: E0912 17:39:53.546016 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:53.546309 kubelet[2512]: E0912 17:39:53.546299 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.546369 kubelet[2512]: W0912 17:39:53.546356 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.546524 kubelet[2512]: E0912 17:39:53.546430 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.546721 kubelet[2512]: E0912 17:39:53.546658 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.546721 kubelet[2512]: W0912 17:39:53.546674 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.546721 kubelet[2512]: E0912 17:39:53.546689 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.573447 kubelet[2512]: E0912 17:39:53.573410 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.573447 kubelet[2512]: W0912 17:39:53.573437 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.573447 kubelet[2512]: E0912 17:39:53.573460 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.574566 kubelet[2512]: E0912 17:39:53.574537 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.574566 kubelet[2512]: W0912 17:39:53.574559 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.574907 kubelet[2512]: E0912 17:39:53.574585 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.574907 kubelet[2512]: E0912 17:39:53.574882 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.574907 kubelet[2512]: W0912 17:39:53.574894 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.574907 kubelet[2512]: E0912 17:39:53.574907 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:39:53.575168 kubelet[2512]: E0912 17:39:53.575154 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.575168 kubelet[2512]: W0912 17:39:53.575166 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.575410 kubelet[2512]: E0912 17:39:53.575183 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.575441 kubelet[2512]: E0912 17:39:53.575420 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.575441 kubelet[2512]: W0912 17:39:53.575430 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.575494 kubelet[2512]: E0912 17:39:53.575441 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.575628 kubelet[2512]: E0912 17:39:53.575615 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.575628 kubelet[2512]: W0912 17:39:53.575627 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.575781 kubelet[2512]: E0912 17:39:53.575683 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.576158 kubelet[2512]: E0912 17:39:53.576136 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.576158 kubelet[2512]: W0912 17:39:53.576153 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.577487 kubelet[2512]: E0912 17:39:53.576219 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:39:53.577487 kubelet[2512]: E0912 17:39:53.577260 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:39:53.577487 kubelet[2512]: W0912 17:39:53.577273 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:39:53.577487 kubelet[2512]: E0912 17:39:53.577361 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:39:53.577748 kubelet[2512]: E0912 17:39:53.577732 2512 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:39:53.577874 kubelet[2512]: W0912 17:39:53.577790 2512 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:39:53.577938 kubelet[2512]: E0912 17:39:53.577925 2512 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[the same three-record FlexVolume failure sequence repeats with fresh timestamps through Sep 12 17:39:53.582; duplicate records omitted]
Sep 12 17:39:54.452866 kubelet[2512]: I0912 17:39:54.452824 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:39:54.453994 kubelet[2512]: E0912 17:39:54.453319 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
[the FlexVolume failure sequence resumes and repeats intermittently from Sep 12 17:39:54.456 through Sep 12 17:39:54.495, interleaved with the containerd records below; duplicate records omitted]
Sep 12 17:39:54.475061 containerd[1471]: time="2025-09-12T17:39:54.474189842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:54.475695 containerd[1471]: time="2025-09-12T17:39:54.475639220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660"
Sep 12 17:39:54.476602 containerd[1471]: time="2025-09-12T17:39:54.476565552Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:54.478977 containerd[1471]: time="2025-09-12T17:39:54.478932960Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:54.481124 containerd[1471]: time="2025-09-12T17:39:54.481051940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.528065502s"
Sep 12 17:39:54.481291 containerd[1471]: time="2025-09-12T17:39:54.481268760Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 17:39:54.486152 containerd[1471]: time="2025-09-12T17:39:54.485520753Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 17:39:54.511319 containerd[1471]: time="2025-09-12T17:39:54.511259021Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766\""
Sep 12 17:39:54.513809 containerd[1471]: time="2025-09-12T17:39:54.513754515Z" level=info msg="StartContainer for \"aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766\""
Sep 12 17:39:54.584386 systemd[1]: Started cri-containerd-aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766.scope - libcontainer container aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766.
Sep 12 17:39:54.639108 containerd[1471]: time="2025-09-12T17:39:54.639006461Z" level=info msg="StartContainer for \"aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766\" returns successfully"
Sep 12 17:39:54.672887 systemd[1]: cri-containerd-aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766.scope: Deactivated successfully.
Sep 12 17:39:54.715612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766-rootfs.mount: Deactivated successfully.
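[editor's note] The flood above is the kubelet re-probing a FlexVolume plugin directory whose driver binary (/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds) does not exist: each init call yields empty output, and decoding "" as JSON fails. For orientation, here is a minimal Go sketch of the handshake the kubelet expects from a FlexVolume driver, following the FlexVolume convention of subcommand-in-argv and a JSON status object on stdout; it is an illustrative stand-in, not the real uds driver.

```go
// Hypothetical stand-in for the missing "uds" FlexVolume driver; it only
// implements the "init" handshake that the kubelet keeps retrying above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints on stdout.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Println(`{"status":"Failure","message":"no command given"}`)
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// An empty stdout here is exactly what produces the kubelet's
		// "unexpected end of JSON input" errors in the log above.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
		os.Exit(1)
	}
}
```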
Sep 12 17:39:54.746233 containerd[1471]: time="2025-09-12T17:39:54.725904145Z" level=info msg="shim disconnected" id=aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766 namespace=k8s.io
Sep 12 17:39:54.746719 containerd[1471]: time="2025-09-12T17:39:54.746481200Z" level=warning msg="cleaning up after shim disconnected" id=aa8f7fa386d6a68d2a2875508f8f73ac13b770a689e11d2f4628002d31109766 namespace=k8s.io
Sep 12 17:39:54.746719 containerd[1471]: time="2025-09-12T17:39:54.746508256Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:39:55.232577 kubelet[2512]: E0912 17:39:55.232386 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6"
Sep 12 17:39:55.463398 containerd[1471]: time="2025-09-12T17:39:55.463347624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 17:39:57.234197 kubelet[2512]: E0912 17:39:57.232897 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6"
Sep 12 17:39:59.232769 kubelet[2512]: E0912 17:39:59.232423 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6"
Sep 12 17:39:59.884554 containerd[1471]: time="2025-09-12T17:39:59.884499242Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:59.886495 containerd[1471]: time="2025-09-12T17:39:59.886440679Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613"
Sep 12 17:39:59.887524 containerd[1471]: time="2025-09-12T17:39:59.887299016Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:59.889955 containerd[1471]: time="2025-09-12T17:39:59.889892451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:39:59.891806 containerd[1471]: time="2025-09-12T17:39:59.891384307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 4.427976822s"
Sep 12 17:39:59.891806 containerd[1471]: time="2025-09-12T17:39:59.891443910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\""
Sep 12 17:39:59.895320 containerd[1471]: time="2025-09-12T17:39:59.895164724Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Sep 12 17:39:59.920815 containerd[1471]: time="2025-09-12T17:39:59.920644362Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca\""
Sep 12 17:39:59.924032 containerd[1471]: time="2025-09-12T17:39:59.923255576Z" level=info msg="StartContainer for \"d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca\""
Sep 12 17:40:00.042443 systemd[1]: Started cri-containerd-d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca.scope - libcontainer container d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca.
Sep 12 17:40:00.201267 containerd[1471]: time="2025-09-12T17:40:00.198817260Z" level=info msg="StartContainer for \"d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca\" returns successfully"
Sep 12 17:40:01.060977 systemd[1]: cri-containerd-d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca.scope: Deactivated successfully.
Sep 12 17:40:01.123441 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca-rootfs.mount: Deactivated successfully.
Sep 12 17:40:01.135552 containerd[1471]: time="2025-09-12T17:40:01.133329443Z" level=info msg="shim disconnected" id=d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca namespace=k8s.io
Sep 12 17:40:01.135552 containerd[1471]: time="2025-09-12T17:40:01.133434918Z" level=warning msg="cleaning up after shim disconnected" id=d52e2abc53b0e5f91f1bf91c197821b36bf4686ffa9b9f3f577d87def6c67cca namespace=k8s.io
Sep 12 17:40:01.135552 containerd[1471]: time="2025-09-12T17:40:01.133450827Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:40:01.169418 kubelet[2512]: I0912 17:40:01.165512 2512 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:40:01.251430 systemd[1]: Created slice kubepods-besteffort-pod06c23272_db86_4e7e_9c53_92578f077ab6.slice - libcontainer container kubepods-besteffort-pod06c23272_db86_4e7e_9c53_92578f077ab6.slice.
Sep 12 17:40:01.262307 containerd[1471]: time="2025-09-12T17:40:01.262227442Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k29j,Uid:06c23272-db86-4e7e-9c53-92578f077ab6,Namespace:calico-system,Attempt:0,}"
Sep 12 17:40:01.276247 systemd[1]: Created slice kubepods-burstable-pod5925fe00_d8c6_4534_a505_f32b5402931d.slice - libcontainer container kubepods-burstable-pod5925fe00_d8c6_4534_a505_f32b5402931d.slice.
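[editor's note] The slice names above follow the kubelet's systemd cgroup naming scheme: the pod's QoS class selects the parent slice and the pod UID is escaped by turning dashes into underscores. A minimal Go sketch that reproduces the names seen in this log; the helper name and the guaranteed-class special case are my assumptions about the scheme, not kubelet source.

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the naming visible in the journal: the pod UID's
// dashes become underscores, and the QoS tier is folded into the slice name.
// Guaranteed pods living directly under kubepods.slice is an assumption here.
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// Matches "kubepods-besteffort-pod06c23272_db86_4e7e_9c53_92578f077ab6.slice" above.
	fmt.Println(podSliceName("besteffort", "06c23272-db86-4e7e-9c53-92578f077ab6"))
	// Matches "kubepods-burstable-pod5925fe00_d8c6_4534_a505_f32b5402931d.slice" above.
	fmt.Println(podSliceName("burstable", "5925fe00-d8c6-4534-a505-f32b5402931d"))
}
```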
Sep 12 17:40:01.351340 kubelet[2512]: W0912 17:40:01.348356 2512 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:ci-4081.3.6-a-756b4d7dc2" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object
Sep 12 17:40:01.351340 kubelet[2512]: E0912 17:40:01.348407 2512 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" logger="UnhandledError"
Sep 12 17:40:01.351340 kubelet[2512]: W0912 17:40:01.348495 2512 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-a-756b4d7dc2" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object
Sep 12 17:40:01.351340 kubelet[2512]: E0912 17:40:01.348508 2512 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" logger="UnhandledError"
Sep 12 17:40:01.351715 kubelet[2512]: W0912 17:40:01.348904 2512 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ci-4081.3.6-a-756b4d7dc2" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object
Sep 12 17:40:01.351715 kubelet[2512]: E0912 17:40:01.348930 2512 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:ci-4081.3.6-a-756b4d7dc2\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'ci-4081.3.6-a-756b4d7dc2' and this object" logger="UnhandledError"
Sep 12 17:40:01.355309 systemd[1]: Created slice kubepods-besteffort-podfcab9732_3c6c_4b48_8986_45fc7d97bb57.slice - libcontainer container kubepods-besteffort-podfcab9732_3c6c_4b48_8986_45fc7d97bb57.slice.
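[editor's note] The "no relationship found between node ... and this object" messages come from the node authorizer: a kubelet may only read Secrets and ConfigMaps referenced by pods already bound to it, so these list attempts are forbidden until the calico-system and calico-apiserver pods are assigned to the node, at which point they resolve on their own. A hedged client-go sketch of checking such access explicitly with a SelfSubjectAccessReview; this illustrates the API, it is not what the kubelet itself does.

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Loads whatever kubeconfig the caller has; reproducing the kubelet's
	// view would require the node's own credentials (illustrative only).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Namespace: "calico-system",
				Verb:      "list",
				Resource:  "secrets",
				Name:      "whisker-backend-key-pair", // name taken from the log above
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```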
Sep 12 17:40:01.356227 kubelet[2512]: I0912 17:40:01.356192 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-ca-bundle\") pod \"whisker-6994b65855-tsfv5\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " pod="calico-system/whisker-6994b65855-tsfv5"
Sep 12 17:40:01.357273 kubelet[2512]: I0912 17:40:01.356442 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcab9732-3c6c-4b48-8986-45fc7d97bb57-calico-apiserver-certs\") pod \"calico-apiserver-854494659d-kvgvx\" (UID: \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\") " pod="calico-apiserver/calico-apiserver-854494659d-kvgvx"
Sep 12 17:40:01.357520 kubelet[2512]: I0912 17:40:01.357490 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-658bz\" (UniqueName: \"kubernetes.io/projected/b35527ec-e920-470f-8bab-682f16b3b48b-kube-api-access-658bz\") pod \"coredns-668d6bf9bc-g49wb\" (UID: \"b35527ec-e920-470f-8bab-682f16b3b48b\") " pod="kube-system/coredns-668d6bf9bc-g49wb"
Sep 12 17:40:01.357656 kubelet[2512]: I0912 17:40:01.357632 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5925fe00-d8c6-4534-a505-f32b5402931d-config-volume\") pod \"coredns-668d6bf9bc-t55ll\" (UID: \"5925fe00-d8c6-4534-a505-f32b5402931d\") " pod="kube-system/coredns-668d6bf9bc-t55ll"
Sep 12 17:40:01.359225 kubelet[2512]: I0912 17:40:01.357741 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gl47\" (UniqueName: \"kubernetes.io/projected/5925fe00-d8c6-4534-a505-f32b5402931d-kube-api-access-2gl47\") pod \"coredns-668d6bf9bc-t55ll\" (UID: \"5925fe00-d8c6-4534-a505-f32b5402931d\") " pod="kube-system/coredns-668d6bf9bc-t55ll"
Sep 12 17:40:01.359225 kubelet[2512]: I0912 17:40:01.357787 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sl9gs\" (UniqueName: \"kubernetes.io/projected/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-kube-api-access-sl9gs\") pod \"whisker-6994b65855-tsfv5\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " pod="calico-system/whisker-6994b65855-tsfv5"
Sep 12 17:40:01.359225 kubelet[2512]: I0912 17:40:01.357822 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jp4x\" (UniqueName: \"kubernetes.io/projected/fcab9732-3c6c-4b48-8986-45fc7d97bb57-kube-api-access-2jp4x\") pod \"calico-apiserver-854494659d-kvgvx\" (UID: \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\") " pod="calico-apiserver/calico-apiserver-854494659d-kvgvx"
Sep 12 17:40:01.359225 kubelet[2512]: I0912 17:40:01.357856 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e802a89-f9ff-49ae-94df-fd49494b3d9b-tigera-ca-bundle\") pod \"calico-kube-controllers-dc9d95797-xw78z\" (UID: \"8e802a89-f9ff-49ae-94df-fd49494b3d9b\") " pod="calico-system/calico-kube-controllers-dc9d95797-xw78z"
Sep 12 17:40:01.359225 kubelet[2512]: I0912 17:40:01.357882 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b35527ec-e920-470f-8bab-682f16b3b48b-config-volume\") pod \"coredns-668d6bf9bc-g49wb\" (UID: \"b35527ec-e920-470f-8bab-682f16b3b48b\") " pod="kube-system/coredns-668d6bf9bc-g49wb"
Sep 12 17:40:01.359535 kubelet[2512]: I0912 17:40:01.357928 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-backend-key-pair\") pod \"whisker-6994b65855-tsfv5\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " pod="calico-system/whisker-6994b65855-tsfv5"
Sep 12 17:40:01.359535 kubelet[2512]: I0912 17:40:01.357957 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwk2f\" (UniqueName: \"kubernetes.io/projected/8e802a89-f9ff-49ae-94df-fd49494b3d9b-kube-api-access-cwk2f\") pod \"calico-kube-controllers-dc9d95797-xw78z\" (UID: \"8e802a89-f9ff-49ae-94df-fd49494b3d9b\") " pod="calico-system/calico-kube-controllers-dc9d95797-xw78z"
Sep 12 17:40:01.401667 systemd[1]: Created slice kubepods-besteffort-pod8e802a89_f9ff_49ae_94df_fd49494b3d9b.slice - libcontainer container kubepods-besteffort-pod8e802a89_f9ff_49ae_94df_fd49494b3d9b.slice.
Sep 12 17:40:01.436895 systemd[1]: Created slice kubepods-besteffort-pod80356a36_c1e0_4d4a_9e83_4527f10c0a1e.slice - libcontainer container kubepods-besteffort-pod80356a36_c1e0_4d4a_9e83_4527f10c0a1e.slice.
Sep 12 17:40:01.462116 kubelet[2512]: I0912 17:40:01.460698 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/eb8fd32a-7751-44df-ab7f-52654a5a58c4-goldmane-key-pair\") pod \"goldmane-54d579b49d-srrrn\" (UID: \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\") " pod="calico-system/goldmane-54d579b49d-srrrn"
Sep 12 17:40:01.462116 kubelet[2512]: I0912 17:40:01.460803 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb8fd32a-7751-44df-ab7f-52654a5a58c4-config\") pod \"goldmane-54d579b49d-srrrn\" (UID: \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\") " pod="calico-system/goldmane-54d579b49d-srrrn"
Sep 12 17:40:01.462116 kubelet[2512]: I0912 17:40:01.461127 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2sfz\" (UniqueName: \"kubernetes.io/projected/2abfa125-f454-4039-a6a8-433554adf69c-kube-api-access-t2sfz\") pod \"calico-apiserver-74f4d49d55-cj5zp\" (UID: \"2abfa125-f454-4039-a6a8-433554adf69c\") " pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp"
Sep 12 17:40:01.462116 kubelet[2512]: I0912 17:40:01.461236 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb8fd32a-7751-44df-ab7f-52654a5a58c4-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-srrrn\" (UID: \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\") " pod="calico-system/goldmane-54d579b49d-srrrn"
Sep 12 17:40:01.462116 kubelet[2512]: I0912 17:40:01.461283 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/2abfa125-f454-4039-a6a8-433554adf69c-calico-apiserver-certs\") pod \"calico-apiserver-74f4d49d55-cj5zp\" (UID: \"2abfa125-f454-4039-a6a8-433554adf69c\") " pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp"
Sep 12 17:40:01.462411 kubelet[2512]: I0912 17:40:01.461339 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9lxv\" (UniqueName: \"kubernetes.io/projected/eb8fd32a-7751-44df-ab7f-52654a5a58c4-kube-api-access-k9lxv\") pod \"goldmane-54d579b49d-srrrn\" (UID: \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\") " pod="calico-system/goldmane-54d579b49d-srrrn"
Sep 12 17:40:01.462411 kubelet[2512]: I0912 17:40:01.461362 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-calico-apiserver-certs\") pod \"calico-apiserver-854494659d-799gf\" (UID: \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\") " pod="calico-apiserver/calico-apiserver-854494659d-799gf"
Sep 12 17:40:01.462411 kubelet[2512]: I0912 17:40:01.461413 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvg22\" (UniqueName: \"kubernetes.io/projected/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-kube-api-access-nvg22\") pod \"calico-apiserver-854494659d-799gf\" (UID: \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\") " pod="calico-apiserver/calico-apiserver-854494659d-799gf"
Sep 12 17:40:01.503645 systemd[1]: Created slice kubepods-burstable-podb35527ec_e920_470f_8bab_682f16b3b48b.slice - libcontainer container kubepods-burstable-podb35527ec_e920_470f_8bab_682f16b3b48b.slice.
Sep 12 17:40:01.509143 containerd[1471]: time="2025-09-12T17:40:01.505258120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 17:40:01.557063 systemd[1]: Created slice kubepods-besteffort-pod2abfa125_f454_4039_a6a8_433554adf69c.slice - libcontainer container kubepods-besteffort-pod2abfa125_f454_4039_a6a8_433554adf69c.slice.
Sep 12 17:40:01.682627 systemd[1]: Created slice kubepods-besteffort-podeb8fd32a_7751_44df_ab7f_52654a5a58c4.slice - libcontainer container kubepods-besteffort-podeb8fd32a_7751_44df_ab7f_52654a5a58c4.slice.
Sep 12 17:40:01.703794 systemd[1]: Created slice kubepods-besteffort-poddbf3d71b_4b9a_4508_98f1_871dbdecc9e3.slice - libcontainer container kubepods-besteffort-poddbf3d71b_4b9a_4508_98f1_871dbdecc9e3.slice.
Sep 12 17:40:01.730116 containerd[1471]: time="2025-09-12T17:40:01.729522739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc9d95797-xw78z,Uid:8e802a89-f9ff-49ae-94df-fd49494b3d9b,Namespace:calico-system,Attempt:0,}"
Sep 12 17:40:01.841721 kubelet[2512]: E0912 17:40:01.841128 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:40:01.849731 containerd[1471]: time="2025-09-12T17:40:01.848551247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g49wb,Uid:b35527ec-e920-470f-8bab-682f16b3b48b,Namespace:kube-system,Attempt:0,}"
Sep 12 17:40:01.891771 kubelet[2512]: E0912 17:40:01.891166 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:40:01.892892 containerd[1471]: time="2025-09-12T17:40:01.892487419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t55ll,Uid:5925fe00-d8c6-4534-a505-f32b5402931d,Namespace:kube-system,Attempt:0,}"
Sep 12 17:40:02.011474 containerd[1471]: time="2025-09-12T17:40:02.010302231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-srrrn,Uid:eb8fd32a-7751-44df-ab7f-52654a5a58c4,Namespace:calico-system,Attempt:0,}"
Sep 12 17:40:02.369659 containerd[1471]: time="2025-09-12T17:40:02.368957585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-kvgvx,Uid:fcab9732-3c6c-4b48-8986-45fc7d97bb57,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:40:02.376997 containerd[1471]: time="2025-09-12T17:40:02.376168535Z" level=error msg="Failed to destroy network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.391060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be-shm.mount: Deactivated successfully.
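[editor's note] Every sandbox failure that follows has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes only once it is running, and node is still being pulled (see the PullImage record above). A tiny Go sketch of that precondition check, with the error text mirroring the plugin's message; this is an illustration, not Calico's actual source.

```go
package main

import (
	"fmt"
	"os"
)

// checkNodename reproduces the guard the log keeps tripping over: the CNI
// plugin refuses to add or delete pod networks until calico/node has
// written its nodename file into the host-mounted /var/lib/calico/.
func checkNodename() error {
	const path = "/var/lib/calico/nodename"
	if _, err := os.Stat(path); err != nil {
		return fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return nil
}

func main() {
	if err := checkNodename(); err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("calico/node is up; sandbox networking can proceed")
}
```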
Sep 12 17:40:02.425328 containerd[1471]: time="2025-09-12T17:40:02.424158302Z" level=error msg="encountered an error cleaning up failed sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.427117 containerd[1471]: time="2025-09-12T17:40:02.426522539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k29j,Uid:06c23272-db86-4e7e-9c53-92578f077ab6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.429110 kubelet[2512]: E0912 17:40:02.428043 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.430428 kubelet[2512]: E0912 17:40:02.429964 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k29j"
Sep 12 17:40:02.430428 kubelet[2512]: E0912 17:40:02.430115 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4k29j"
Sep 12 17:40:02.432068 kubelet[2512]: E0912 17:40:02.430239 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4k29j_calico-system(06c23272-db86-4e7e-9c53-92578f077ab6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4k29j_calico-system(06c23272-db86-4e7e-9c53-92578f077ab6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.460517444Z" level=error msg="Failed to destroy network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.460823192Z" level=error msg="Failed to destroy network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.461124645Z" level=error msg="encountered an error cleaning up failed sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.461215967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc9d95797-xw78z,Uid:8e802a89-f9ff-49ae-94df-fd49494b3d9b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.461306140Z" level=error msg="encountered an error cleaning up failed sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.462111 containerd[1471]: time="2025-09-12T17:40:02.461375479Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g49wb,Uid:b35527ec-e920-470f-8bab-682f16b3b48b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.470927 kubelet[2512]: E0912 17:40:02.470856 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.471312 kubelet[2512]: E0912 17:40:02.470856 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.471517 kubelet[2512]: E0912 17:40:02.471473 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g49wb"
Sep 12 17:40:02.471699 kubelet[2512]: E0912 17:40:02.471669 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-g49wb"
Sep 12 17:40:02.471883 kubelet[2512]: E0912 17:40:02.471606 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc9d95797-xw78z"
Sep 12 17:40:02.472023 kubelet[2512]: E0912 17:40:02.471997 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-dc9d95797-xw78z"
Sep 12 17:40:02.473291 kubelet[2512]: E0912 17:40:02.473054 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-dc9d95797-xw78z_calico-system(8e802a89-f9ff-49ae-94df-fd49494b3d9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-dc9d95797-xw78z_calico-system(8e802a89-f9ff-49ae-94df-fd49494b3d9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc9d95797-xw78z" podUID="8e802a89-f9ff-49ae-94df-fd49494b3d9b"
Sep 12 17:40:02.474635 kubelet[2512]: E0912 17:40:02.472153 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-g49wb_kube-system(b35527ec-e920-470f-8bab-682f16b3b48b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-g49wb_kube-system(b35527ec-e920-470f-8bab-682f16b3b48b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g49wb" podUID="b35527ec-e920-470f-8bab-682f16b3b48b"
Sep 12 17:40:02.513829 kubelet[2512]: I0912 17:40:02.512920 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0"
Sep 12 17:40:02.529741 kubelet[2512]: I0912 17:40:02.529491 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be"
Sep 12 17:40:02.535337 containerd[1471]: time="2025-09-12T17:40:02.535273284Z" level=info msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\""
Sep 12 17:40:02.537274 containerd[1471]: time="2025-09-12T17:40:02.537092248Z" level=info msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\""
Sep 12 17:40:02.545125 containerd[1471]: time="2025-09-12T17:40:02.544697894Z" level=info msg="Ensure that sandbox 6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be in task-service has been cleanup successfully"
Sep 12 17:40:02.549493 containerd[1471]: time="2025-09-12T17:40:02.549424953Z" level=info msg="Ensure that sandbox c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0 in task-service has been cleanup successfully"
Sep 12 17:40:02.551766 containerd[1471]: time="2025-09-12T17:40:02.551060892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-cj5zp,Uid:2abfa125-f454-4039-a6a8-433554adf69c,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:40:02.558119 kubelet[2512]: I0912 17:40:02.557810 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263"
Sep 12 17:40:02.565931 containerd[1471]: time="2025-09-12T17:40:02.565757953Z" level=info msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\""
Sep 12 17:40:02.570368 containerd[1471]: time="2025-09-12T17:40:02.569008365Z" level=info msg="Ensure that sandbox 17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263 in task-service has been cleanup successfully"
Sep 12 17:40:02.578590 containerd[1471]: time="2025-09-12T17:40:02.578180462Z" level=error msg="Failed to destroy network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.586237 containerd[1471]: time="2025-09-12T17:40:02.586019870Z" level=error msg="encountered an error cleaning up failed sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.590950 containerd[1471]: time="2025-09-12T17:40:02.590271174Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t55ll,Uid:5925fe00-d8c6-4534-a505-f32b5402931d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.592836 kubelet[2512]: E0912 17:40:02.592766 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.594372 kubelet[2512]: E0912 17:40:02.592864 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t55ll"
Sep 12 17:40:02.594372 kubelet[2512]: E0912 17:40:02.592902 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t55ll"
Sep 12 17:40:02.594372 kubelet[2512]: E0912 17:40:02.592984 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t55ll_kube-system(5925fe00-d8c6-4534-a505-f32b5402931d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t55ll_kube-system(5925fe00-d8c6-4534-a505-f32b5402931d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t55ll" podUID="5925fe00-d8c6-4534-a505-f32b5402931d"
Sep 12 17:40:02.635162 containerd[1471]: time="2025-09-12T17:40:02.632107342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-799gf,Uid:dbf3d71b-4b9a-4508-98f1-871dbdecc9e3,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:40:02.675691 containerd[1471]: time="2025-09-12T17:40:02.675360720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994b65855-tsfv5,Uid:80356a36-c1e0-4d4a-9e83-4527f10c0a1e,Namespace:calico-system,Attempt:0,}"
Sep 12 17:40:02.801509 containerd[1471]: time="2025-09-12T17:40:02.801320132Z" level=error msg="Failed to destroy network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.803125 containerd[1471]: time="2025-09-12T17:40:02.803004980Z" level=error msg="encountered an error cleaning up failed sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 17:40:02.804107 containerd[1471]: time="2025-09-12T17:40:02.803727534Z" level=error msg="RunPodSandbox for
&PodSandboxMetadata{Name:goldmane-54d579b49d-srrrn,Uid:eb8fd32a-7751-44df-ab7f-52654a5a58c4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.805489 kubelet[2512]: E0912 17:40:02.805209 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.805489 kubelet[2512]: E0912 17:40:02.805316 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-srrrn" Sep 12 17:40:02.805489 kubelet[2512]: E0912 17:40:02.805352 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-srrrn" Sep 12 17:40:02.806062 kubelet[2512]: E0912 17:40:02.805429 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-srrrn_calico-system(eb8fd32a-7751-44df-ab7f-52654a5a58c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-srrrn_calico-system(eb8fd32a-7751-44df-ab7f-52654a5a58c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-srrrn" podUID="eb8fd32a-7751-44df-ab7f-52654a5a58c4" Sep 12 17:40:02.815784 containerd[1471]: time="2025-09-12T17:40:02.815665102Z" level=error msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" failed" error="failed to destroy network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.816744 kubelet[2512]: E0912 17:40:02.816483 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:02.816744 kubelet[2512]: E0912 17:40:02.816573 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0"} Sep 12 17:40:02.816744 kubelet[2512]: E0912 17:40:02.816641 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b35527ec-e920-470f-8bab-682f16b3b48b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:02.816744 kubelet[2512]: E0912 17:40:02.816690 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b35527ec-e920-470f-8bab-682f16b3b48b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-g49wb" podUID="b35527ec-e920-470f-8bab-682f16b3b48b" Sep 12 17:40:02.852577 containerd[1471]: time="2025-09-12T17:40:02.852408998Z" level=error msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" failed" error="failed to destroy network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.854721 kubelet[2512]: E0912 17:40:02.854219 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:02.855531 kubelet[2512]: E0912 17:40:02.855047 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263"} Sep 12 17:40:02.856351 kubelet[2512]: E0912 17:40:02.856178 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8e802a89-f9ff-49ae-94df-fd49494b3d9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:02.856351 kubelet[2512]: E0912 17:40:02.856272 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"8e802a89-f9ff-49ae-94df-fd49494b3d9b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-dc9d95797-xw78z" podUID="8e802a89-f9ff-49ae-94df-fd49494b3d9b" Sep 12 17:40:02.870751 containerd[1471]: time="2025-09-12T17:40:02.870686425Z" level=error msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" failed" error="failed to destroy network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.871913 kubelet[2512]: E0912 17:40:02.871021 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:02.871913 kubelet[2512]: E0912 17:40:02.871116 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be"} Sep 12 17:40:02.871913 kubelet[2512]: E0912 17:40:02.871219 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06c23272-db86-4e7e-9c53-92578f077ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:02.871913 kubelet[2512]: E0912 17:40:02.871287 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06c23272-db86-4e7e-9c53-92578f077ab6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4k29j" podUID="06c23272-db86-4e7e-9c53-92578f077ab6" Sep 12 17:40:02.995978 containerd[1471]: time="2025-09-12T17:40:02.994357302Z" level=error msg="Failed to destroy network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.995978 containerd[1471]: time="2025-09-12T17:40:02.995183492Z" level=error msg="encountered an error cleaning up failed sandbox 
\"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.995978 containerd[1471]: time="2025-09-12T17:40:02.995264623Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-kvgvx,Uid:fcab9732-3c6c-4b48-8986-45fc7d97bb57,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.998167 kubelet[2512]: E0912 17:40:02.997899 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:02.998167 kubelet[2512]: E0912 17:40:02.997980 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854494659d-kvgvx" Sep 12 17:40:02.998167 kubelet[2512]: E0912 17:40:02.998028 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854494659d-kvgvx" Sep 12 17:40:02.998794 kubelet[2512]: E0912 17:40:02.998653 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-854494659d-kvgvx_calico-apiserver(fcab9732-3c6c-4b48-8986-45fc7d97bb57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-854494659d-kvgvx_calico-apiserver(fcab9732-3c6c-4b48-8986-45fc7d97bb57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854494659d-kvgvx" podUID="fcab9732-3c6c-4b48-8986-45fc7d97bb57" Sep 12 17:40:03.032002 containerd[1471]: time="2025-09-12T17:40:03.031523309Z" level=error msg="Failed to destroy network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
17:40:03.034171 containerd[1471]: time="2025-09-12T17:40:03.034068304Z" level=error msg="encountered an error cleaning up failed sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.034592 containerd[1471]: time="2025-09-12T17:40:03.034373620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-cj5zp,Uid:2abfa125-f454-4039-a6a8-433554adf69c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.035247 kubelet[2512]: E0912 17:40:03.035012 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.035247 kubelet[2512]: E0912 17:40:03.035179 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp" Sep 12 17:40:03.036532 kubelet[2512]: E0912 17:40:03.035484 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp" Sep 12 17:40:03.036532 kubelet[2512]: E0912 17:40:03.035597 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-74f4d49d55-cj5zp_calico-apiserver(2abfa125-f454-4039-a6a8-433554adf69c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-74f4d49d55-cj5zp_calico-apiserver(2abfa125-f454-4039-a6a8-433554adf69c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp" podUID="2abfa125-f454-4039-a6a8-433554adf69c" Sep 12 17:40:03.048309 containerd[1471]: time="2025-09-12T17:40:03.048194548Z" level=error msg="Failed to destroy network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.050647 containerd[1471]: time="2025-09-12T17:40:03.050332695Z" level=error msg="encountered an error cleaning up failed sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.050647 containerd[1471]: time="2025-09-12T17:40:03.050446666Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6994b65855-tsfv5,Uid:80356a36-c1e0-4d4a-9e83-4527f10c0a1e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.050964 kubelet[2512]: E0912 17:40:03.050754 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.050964 kubelet[2512]: E0912 17:40:03.050837 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994b65855-tsfv5" Sep 12 17:40:03.050964 kubelet[2512]: E0912 17:40:03.050898 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6994b65855-tsfv5" Sep 12 17:40:03.053117 kubelet[2512]: E0912 17:40:03.050955 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6994b65855-tsfv5_calico-system(80356a36-c1e0-4d4a-9e83-4527f10c0a1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6994b65855-tsfv5_calico-system(80356a36-c1e0-4d4a-9e83-4527f10c0a1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994b65855-tsfv5" podUID="80356a36-c1e0-4d4a-9e83-4527f10c0a1e" Sep 12 17:40:03.077981 containerd[1471]: time="2025-09-12T17:40:03.077804896Z" level=error msg="Failed to destroy network for sandbox 
\"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.079670 containerd[1471]: time="2025-09-12T17:40:03.079506363Z" level=error msg="encountered an error cleaning up failed sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.079813 containerd[1471]: time="2025-09-12T17:40:03.079706495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-799gf,Uid:dbf3d71b-4b9a-4508-98f1-871dbdecc9e3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.080255 kubelet[2512]: E0912 17:40:03.080142 2512 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.081215 kubelet[2512]: E0912 17:40:03.080534 2512 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854494659d-799gf" Sep 12 17:40:03.081215 kubelet[2512]: E0912 17:40:03.080634 2512 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-854494659d-799gf" Sep 12 17:40:03.081215 kubelet[2512]: E0912 17:40:03.080755 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-854494659d-799gf_calico-apiserver(dbf3d71b-4b9a-4508-98f1-871dbdecc9e3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-854494659d-799gf_calico-apiserver(dbf3d71b-4b9a-4508-98f1-871dbdecc9e3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854494659d-799gf" podUID="dbf3d71b-4b9a-4508-98f1-871dbdecc9e3" Sep 
12 17:40:03.129131 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef-shm.mount: Deactivated successfully. Sep 12 17:40:03.129319 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e-shm.mount: Deactivated successfully. Sep 12 17:40:03.129415 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0-shm.mount: Deactivated successfully. Sep 12 17:40:03.129500 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263-shm.mount: Deactivated successfully. Sep 12 17:40:03.575651 kubelet[2512]: I0912 17:40:03.575600 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:03.582619 containerd[1471]: time="2025-09-12T17:40:03.582010026Z" level=info msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" Sep 12 17:40:03.582619 containerd[1471]: time="2025-09-12T17:40:03.582319509Z" level=info msg="Ensure that sandbox 460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37 in task-service has been cleanup successfully" Sep 12 17:40:03.586632 kubelet[2512]: I0912 17:40:03.586595 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:03.593408 containerd[1471]: time="2025-09-12T17:40:03.592157704Z" level=info msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" Sep 12 17:40:03.594692 containerd[1471]: time="2025-09-12T17:40:03.594181333Z" level=info msg="Ensure that sandbox 0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7 in task-service has been cleanup successfully" Sep 12 17:40:03.603262 kubelet[2512]: I0912 17:40:03.602481 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:03.606229 containerd[1471]: time="2025-09-12T17:40:03.605711778Z" level=info msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" Sep 12 17:40:03.608293 containerd[1471]: time="2025-09-12T17:40:03.608049113Z" level=info msg="Ensure that sandbox ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129 in task-service has been cleanup successfully" Sep 12 17:40:03.616434 kubelet[2512]: I0912 17:40:03.616325 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:03.626393 containerd[1471]: time="2025-09-12T17:40:03.624058262Z" level=info msg="StopPodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" Sep 12 17:40:03.626393 containerd[1471]: time="2025-09-12T17:40:03.626059121Z" level=info msg="Ensure that sandbox c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76 in task-service has been cleanup successfully" Sep 12 17:40:03.636444 kubelet[2512]: I0912 17:40:03.636287 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:03.642446 containerd[1471]: time="2025-09-12T17:40:03.642159354Z" 
level=info msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" Sep 12 17:40:03.643115 containerd[1471]: time="2025-09-12T17:40:03.642834722Z" level=info msg="Ensure that sandbox b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef in task-service has been cleanup successfully" Sep 12 17:40:03.652686 kubelet[2512]: I0912 17:40:03.652527 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:03.656235 containerd[1471]: time="2025-09-12T17:40:03.656009511Z" level=info msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" Sep 12 17:40:03.660669 containerd[1471]: time="2025-09-12T17:40:03.660364070Z" level=info msg="Ensure that sandbox 22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e in task-service has been cleanup successfully" Sep 12 17:40:03.735690 containerd[1471]: time="2025-09-12T17:40:03.735633796Z" level=error msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" failed" error="failed to destroy network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.737597 kubelet[2512]: E0912 17:40:03.737314 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:03.737597 kubelet[2512]: E0912 17:40:03.737418 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef"} Sep 12 17:40:03.737597 kubelet[2512]: E0912 17:40:03.737476 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.737597 kubelet[2512]: E0912 17:40:03.737514 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"eb8fd32a-7751-44df-ab7f-52654a5a58c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-srrrn" podUID="eb8fd32a-7751-44df-ab7f-52654a5a58c4" Sep 12 17:40:03.774728 containerd[1471]: time="2025-09-12T17:40:03.773880205Z" level=error msg="StopPodSandbox for 
\"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" failed" error="failed to destroy network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.775484 kubelet[2512]: E0912 17:40:03.775291 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:03.775484 kubelet[2512]: E0912 17:40:03.775358 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76"} Sep 12 17:40:03.775484 kubelet[2512]: E0912 17:40:03.775406 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.775484 kubelet[2512]: E0912 17:40:03.775433 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854494659d-799gf" podUID="dbf3d71b-4b9a-4508-98f1-871dbdecc9e3" Sep 12 17:40:03.806759 containerd[1471]: time="2025-09-12T17:40:03.806368876Z" level=error msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" failed" error="failed to destroy network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.807923 kubelet[2512]: E0912 17:40:03.807168 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:03.807923 kubelet[2512]: E0912 17:40:03.807251 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37"} Sep 12 17:40:03.807923 kubelet[2512]: E0912 17:40:03.807383 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2abfa125-f454-4039-a6a8-433554adf69c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.807923 kubelet[2512]: E0912 17:40:03.807426 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2abfa125-f454-4039-a6a8-433554adf69c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp" podUID="2abfa125-f454-4039-a6a8-433554adf69c" Sep 12 17:40:03.838841 containerd[1471]: time="2025-09-12T17:40:03.838061598Z" level=error msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" failed" error="failed to destroy network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.839019 kubelet[2512]: E0912 17:40:03.838413 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:03.839019 kubelet[2512]: E0912 17:40:03.838522 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7"} Sep 12 17:40:03.839019 kubelet[2512]: E0912 17:40:03.838587 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.839019 kubelet[2512]: E0912 17:40:03.838628 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-854494659d-kvgvx" podUID="fcab9732-3c6c-4b48-8986-45fc7d97bb57" Sep 12 17:40:03.850743 containerd[1471]: time="2025-09-12T17:40:03.850671833Z" level=error msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" failed" error="failed to destroy network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.851543 kubelet[2512]: E0912 17:40:03.851277 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:03.851543 kubelet[2512]: E0912 17:40:03.851366 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129"} Sep 12 17:40:03.851543 kubelet[2512]: E0912 17:40:03.851415 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.851543 kubelet[2512]: E0912 17:40:03.851454 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6994b65855-tsfv5" podUID="80356a36-c1e0-4d4a-9e83-4527f10c0a1e" Sep 12 17:40:03.855132 containerd[1471]: time="2025-09-12T17:40:03.854997637Z" level=error msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" failed" error="failed to destroy network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:40:03.857558 kubelet[2512]: E0912 17:40:03.857299 2512 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:03.857558 kubelet[2512]: E0912 17:40:03.857395 2512 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e"} Sep 12 17:40:03.857558 kubelet[2512]: E0912 17:40:03.857454 2512 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5925fe00-d8c6-4534-a505-f32b5402931d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:40:03.857558 kubelet[2512]: E0912 17:40:03.857502 2512 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5925fe00-d8c6-4534-a505-f32b5402931d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t55ll" podUID="5925fe00-d8c6-4534-a505-f32b5402931d" Sep 12 17:40:10.755158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4257151929.mount: Deactivated successfully. Sep 12 17:40:10.967421 containerd[1471]: time="2025-09-12T17:40:10.905804571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:40:10.987434 containerd[1471]: time="2025-09-12T17:40:10.986706678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.473673337s" Sep 12 17:40:10.987434 containerd[1471]: time="2025-09-12T17:40:10.986792669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:40:11.026463 containerd[1471]: time="2025-09-12T17:40:11.024929659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.047383 containerd[1471]: time="2025-09-12T17:40:11.047324610Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.049484 containerd[1471]: time="2025-09-12T17:40:11.049395136Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:11.097260 containerd[1471]: time="2025-09-12T17:40:11.096632045Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:40:11.219038 containerd[1471]: time="2025-09-12T17:40:11.218835114Z" level=info msg="CreateContainer within sandbox \"d5bfb8fcd371dfd6fe0497d56bc37db643883502308cd468152d500f60be5167\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd\"" Sep 12 17:40:11.220496 containerd[1471]: time="2025-09-12T17:40:11.219862270Z" level=info msg="StartContainer for \"021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd\"" Sep 12 17:40:11.469483 systemd[1]: Started cri-containerd-021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd.scope - libcontainer container 021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd. Sep 12 17:40:11.554842 containerd[1471]: time="2025-09-12T17:40:11.554561090Z" level=info msg="StartContainer for \"021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd\" returns successfully" Sep 12 17:40:11.758519 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:40:11.761112 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Sep 12 17:40:11.802783 kubelet[2512]: I0912 17:40:11.796527 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zc8l7" podStartSLOduration=3.660951743 podStartE2EDuration="22.790118196s" podCreationTimestamp="2025-09-12 17:39:49 +0000 UTC" firstStartedPulling="2025-09-12 17:39:51.864274651 +0000 UTC m=+25.820501559" lastFinishedPulling="2025-09-12 17:40:10.99344109 +0000 UTC m=+44.949668012" observedRunningTime="2025-09-12 17:40:11.787421479 +0000 UTC m=+45.743648412" watchObservedRunningTime="2025-09-12 17:40:11.790118196 +0000 UTC m=+45.746345133" Sep 12 17:40:11.991879 containerd[1471]: time="2025-09-12T17:40:11.991189831Z" level=info msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.120 [INFO][3823] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.121 [INFO][3823] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" iface="eth0" netns="/var/run/netns/cni-bbf12119-6f7f-b274-7110-cd747c688d5b" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.122 [INFO][3823] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" iface="eth0" netns="/var/run/netns/cni-bbf12119-6f7f-b274-7110-cd747c688d5b" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.123 [INFO][3823] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" iface="eth0" netns="/var/run/netns/cni-bbf12119-6f7f-b274-7110-cd747c688d5b" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.123 [INFO][3823] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.123 [INFO][3823] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.311 [INFO][3831] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.315 [INFO][3831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.315 [INFO][3831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.337 [WARNING][3831] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.337 [INFO][3831] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.339 [INFO][3831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:12.347210 containerd[1471]: 2025-09-12 17:40:12.342 [INFO][3823] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:12.349377 containerd[1471]: time="2025-09-12T17:40:12.347462186Z" level=info msg="TearDown network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" successfully" Sep 12 17:40:12.349377 containerd[1471]: time="2025-09-12T17:40:12.347503384Z" level=info msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" returns successfully" Sep 12 17:40:12.354632 systemd[1]: run-netns-cni\x2dbbf12119\x2d6f7f\x2db274\x2d7110\x2dcd747c688d5b.mount: Deactivated successfully. 
Sep 12 17:40:12.558626 kubelet[2512]: I0912 17:40:12.558298 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sl9gs\" (UniqueName: \"kubernetes.io/projected/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-kube-api-access-sl9gs\") pod \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " Sep 12 17:40:12.558626 kubelet[2512]: I0912 17:40:12.558390 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-backend-key-pair\") pod \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " Sep 12 17:40:12.558626 kubelet[2512]: I0912 17:40:12.558446 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-ca-bundle\") pod \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\" (UID: \"80356a36-c1e0-4d4a-9e83-4527f10c0a1e\") " Sep 12 17:40:12.571959 systemd[1]: var-lib-kubelet-pods-80356a36\x2dc1e0\x2d4d4a\x2d9e83\x2d4527f10c0a1e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 17:40:12.572382 systemd[1]: var-lib-kubelet-pods-80356a36\x2dc1e0\x2d4d4a\x2d9e83\x2d4527f10c0a1e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsl9gs.mount: Deactivated successfully. Sep 12 17:40:12.600417 kubelet[2512]: I0912 17:40:12.597809 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "80356a36-c1e0-4d4a-9e83-4527f10c0a1e" (UID: "80356a36-c1e0-4d4a-9e83-4527f10c0a1e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:40:12.605125 kubelet[2512]: I0912 17:40:12.605020 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "80356a36-c1e0-4d4a-9e83-4527f10c0a1e" (UID: "80356a36-c1e0-4d4a-9e83-4527f10c0a1e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:40:12.618182 kubelet[2512]: I0912 17:40:12.617884 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-kube-api-access-sl9gs" (OuterVolumeSpecName: "kube-api-access-sl9gs") pod "80356a36-c1e0-4d4a-9e83-4527f10c0a1e" (UID: "80356a36-c1e0-4d4a-9e83-4527f10c0a1e"). InnerVolumeSpecName "kube-api-access-sl9gs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:40:12.659173 kubelet[2512]: I0912 17:40:12.658992 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-ca-bundle\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:12.659173 kubelet[2512]: I0912 17:40:12.659045 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sl9gs\" (UniqueName: \"kubernetes.io/projected/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-kube-api-access-sl9gs\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:12.659173 kubelet[2512]: I0912 17:40:12.659058 2512 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/80356a36-c1e0-4d4a-9e83-4527f10c0a1e-whisker-backend-key-pair\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:12.733690 systemd[1]: Removed slice kubepods-besteffort-pod80356a36_c1e0_4d4a_9e83_4527f10c0a1e.slice - libcontainer container kubepods-besteffort-pod80356a36_c1e0_4d4a_9e83_4527f10c0a1e.slice. Sep 12 17:40:12.735655 kubelet[2512]: I0912 17:40:12.735173 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:12.908869 systemd[1]: Created slice kubepods-besteffort-pod40064e21_a431_4c2a_a859_8fe4fc7bf5ed.slice - libcontainer container kubepods-besteffort-pod40064e21_a431_4c2a_a859_8fe4fc7bf5ed.slice. Sep 12 17:40:12.962471 kubelet[2512]: I0912 17:40:12.962395 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40064e21-a431-4c2a-a859-8fe4fc7bf5ed-whisker-ca-bundle\") pod \"whisker-5d9c765474-k2rqc\" (UID: \"40064e21-a431-4c2a-a859-8fe4fc7bf5ed\") " pod="calico-system/whisker-5d9c765474-k2rqc" Sep 12 17:40:12.962471 kubelet[2512]: I0912 17:40:12.962480 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7427t\" (UniqueName: \"kubernetes.io/projected/40064e21-a431-4c2a-a859-8fe4fc7bf5ed-kube-api-access-7427t\") pod \"whisker-5d9c765474-k2rqc\" (UID: \"40064e21-a431-4c2a-a859-8fe4fc7bf5ed\") " pod="calico-system/whisker-5d9c765474-k2rqc" Sep 12 17:40:12.963438 kubelet[2512]: I0912 17:40:12.962500 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/40064e21-a431-4c2a-a859-8fe4fc7bf5ed-whisker-backend-key-pair\") pod \"whisker-5d9c765474-k2rqc\" (UID: \"40064e21-a431-4c2a-a859-8fe4fc7bf5ed\") " pod="calico-system/whisker-5d9c765474-k2rqc" Sep 12 17:40:13.217751 containerd[1471]: time="2025-09-12T17:40:13.216941646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9c765474-k2rqc,Uid:40064e21-a431-4c2a-a859-8fe4fc7bf5ed,Namespace:calico-system,Attempt:0,}" Sep 12 17:40:13.486546 systemd-networkd[1365]: calid19508082ad: Link UP Sep 12 17:40:13.488470 systemd-networkd[1365]: calid19508082ad: Gained carrier Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.303 [INFO][3854] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.330 [INFO][3854] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0 whisker-5d9c765474- 
calico-system 40064e21-a431-4c2a-a859-8fe4fc7bf5ed 980 0 2025-09-12 17:40:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d9c765474 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 whisker-5d9c765474-k2rqc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calid19508082ad [] [] }} ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.330 [INFO][3854] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.377 [INFO][3865] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" HandleID="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.378 [INFO][3865] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" HandleID="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"whisker-5d9c765474-k2rqc", "timestamp":"2025-09-12 17:40:13.377759115 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.378 [INFO][3865] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.378 [INFO][3865] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.378 [INFO][3865] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.392 [INFO][3865] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.402 [INFO][3865] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.417 [INFO][3865] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.421 [INFO][3865] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.426 [INFO][3865] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.426 [INFO][3865] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.432 [INFO][3865] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8 Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.442 [INFO][3865] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.455 [INFO][3865] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.129/26] block=192.168.64.128/26 handle="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.455 [INFO][3865] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.129/26] handle="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.455 [INFO][3865] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
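The [3865] IPAM lines above are Calico's block-affinity allocator at work: take the host-wide lock, look up this host's affinities, load the affined block 192.168.64.128/26, claim the next free address, write the block back to the datastore, release the lock. A toy model of that path — assumed names and types throughout, not the real libcalico-go ipam code:

    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    // Toy shape of the logged path: lock -> affined block -> claim -> persist.
    var ipamMu sync.Mutex

    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // IP -> handle, e.g. "k8s-pod-network.<containerID>"
    }

    func next(ip net.IP) net.IP {
        out := make(net.IP, len(ip))
        copy(out, ip)
        for i := len(out) - 1; i >= 0; i-- {
            out[i]++
            if out[i] != 0 {
                break
            }
        }
        return out
    }

    func autoAssign(b *block, handle string) (net.IP, error) {
        ipamMu.Lock()         // "Acquired host-wide IPAM lock."
        defer ipamMu.Unlock() // "Released host-wide IPAM lock."
        for ip := b.cidr.IP.Mask(b.cidr.Mask); b.cidr.Contains(ip); ip = next(ip) {
            if _, used := b.allocated[ip.String()]; !used {
                b.allocated[ip.String()] = handle // claim; the real code then writes the block
                return ip, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", b.cidr)
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.64.128/26")
        // Seed .128 as taken so the demo reproduces the .129 first claim seen above.
        b := &block{cidr: cidr, allocated: map[string]string{"192.168.64.128": "reserved"}}
        ip, err := autoAssign(b, "k8s-pod-network.00ff837e...") // handle truncated for display
        fmt.Println(ip, err)                                    // 192.168.64.129 <nil>
    }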
Sep 12 17:40:13.530062 containerd[1471]: 2025-09-12 17:40:13.455 [INFO][3865] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.129/26] IPv6=[] ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" HandleID="k8s-pod-network.00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.462 [INFO][3854] cni-plugin/k8s.go 418: Populated endpoint ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0", GenerateName:"whisker-5d9c765474-", Namespace:"calico-system", SelfLink:"", UID:"40064e21-a431-4c2a-a859-8fe4fc7bf5ed", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d9c765474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"whisker-5d9c765474-k2rqc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid19508082ad", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.462 [INFO][3854] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.129/32] ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.463 [INFO][3854] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid19508082ad ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.475 [INFO][3854] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.477 [INFO][3854] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" 
Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0", GenerateName:"whisker-5d9c765474-", Namespace:"calico-system", SelfLink:"", UID:"40064e21-a431-4c2a-a859-8fe4fc7bf5ed", ResourceVersion:"980", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d9c765474", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8", Pod:"whisker-5d9c765474-k2rqc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.64.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calid19508082ad", MAC:"42:5a:54:2f:03:6b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:13.531763 containerd[1471]: 2025-09-12 17:40:13.526 [INFO][3854] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8" Namespace="calico-system" Pod="whisker-5d9c765474-k2rqc" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--5d9c765474--k2rqc-eth0" Sep 12 17:40:13.588690 containerd[1471]: time="2025-09-12T17:40:13.586746915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:13.588690 containerd[1471]: time="2025-09-12T17:40:13.586983695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:13.588690 containerd[1471]: time="2025-09-12T17:40:13.587008334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:13.588690 containerd[1471]: time="2025-09-12T17:40:13.587282760Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:13.651431 systemd[1]: Started cri-containerd-00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8.scope - libcontainer container 00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8. 
Sep 12 17:40:13.799531 containerd[1471]: time="2025-09-12T17:40:13.799453815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d9c765474-k2rqc,Uid:40064e21-a431-4c2a-a859-8fe4fc7bf5ed,Namespace:calico-system,Attempt:0,} returns sandbox id \"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8\"" Sep 12 17:40:13.816501 containerd[1471]: time="2025-09-12T17:40:13.816158780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:40:14.239147 containerd[1471]: time="2025-09-12T17:40:14.235496624Z" level=info msg="StopPodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" Sep 12 17:40:14.241317 containerd[1471]: time="2025-09-12T17:40:14.240289172Z" level=info msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\"" Sep 12 17:40:14.299423 containerd[1471]: time="2025-09-12T17:40:14.299190951Z" level=info msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" Sep 12 17:40:14.307822 kubelet[2512]: I0912 17:40:14.307569 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80356a36-c1e0-4d4a-9e83-4527f10c0a1e" path="/var/lib/kubelet/pods/80356a36-c1e0-4d4a-9e83-4527f10c0a1e/volumes" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.437 [INFO][4036] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.437 [INFO][4036] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" iface="eth0" netns="/var/run/netns/cni-dc805e22-7e4e-e743-8c16-3f3512ee5bb5" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.440 [INFO][4036] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" iface="eth0" netns="/var/run/netns/cni-dc805e22-7e4e-e743-8c16-3f3512ee5bb5" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.447 [INFO][4036] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" iface="eth0" netns="/var/run/netns/cni-dc805e22-7e4e-e743-8c16-3f3512ee5bb5" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.449 [INFO][4036] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.449 [INFO][4036] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.516 [INFO][4059] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.516 [INFO][4059] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.517 [INFO][4059] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.535 [WARNING][4059] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.535 [INFO][4059] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.542 [INFO][4059] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:14.555114 containerd[1471]: 2025-09-12 17:40:14.545 [INFO][4036] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:14.559424 containerd[1471]: time="2025-09-12T17:40:14.557422722Z" level=info msg="TearDown network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" successfully" Sep 12 17:40:14.559424 containerd[1471]: time="2025-09-12T17:40:14.557468273Z" level=info msg="StopPodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" returns successfully" Sep 12 17:40:14.559424 containerd[1471]: time="2025-09-12T17:40:14.559272296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-799gf,Uid:dbf3d71b-4b9a-4508-98f1-871dbdecc9e3,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:14.558993 systemd[1]: run-netns-cni\x2ddc805e22\x2d7e4e\x2de743\x2d8c16\x2d3f3512ee5bb5.mount: Deactivated successfully. Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.491 [INFO][4039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.493 [INFO][4039] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" iface="eth0" netns="/var/run/netns/cni-3e9ad71d-d082-ea41-01ff-8fb36b88b5e9" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.494 [INFO][4039] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" iface="eth0" netns="/var/run/netns/cni-3e9ad71d-d082-ea41-01ff-8fb36b88b5e9" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.496 [INFO][4039] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" iface="eth0" netns="/var/run/netns/cni-3e9ad71d-d082-ea41-01ff-8fb36b88b5e9" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.496 [INFO][4039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.496 [INFO][4039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.553 [INFO][4069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.554 [INFO][4069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.554 [INFO][4069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.571 [WARNING][4069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.571 [INFO][4069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.574 [INFO][4069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:14.606252 containerd[1471]: 2025-09-12 17:40:14.589 [INFO][4039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:14.607469 containerd[1471]: time="2025-09-12T17:40:14.606678229Z" level=info msg="TearDown network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" successfully" Sep 12 17:40:14.607469 containerd[1471]: time="2025-09-12T17:40:14.606715379Z" level=info msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" returns successfully" Sep 12 17:40:14.611884 kubelet[2512]: E0912 17:40:14.611407 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:14.612816 systemd[1]: run-netns-cni\x2d3e9ad71d\x2dd082\x2dea41\x2d01ff\x2d8fb36b88b5e9.mount: Deactivated successfully. 
Sep 12 17:40:14.616730 containerd[1471]: time="2025-09-12T17:40:14.615876516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g49wb,Uid:b35527ec-e920-470f-8bab-682f16b3b48b,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.460 [INFO][4049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.461 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" iface="eth0" netns="/var/run/netns/cni-99285e4d-5ab4-655d-0f32-f39acede0329" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.462 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" iface="eth0" netns="/var/run/netns/cni-99285e4d-5ab4-655d-0f32-f39acede0329" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.468 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" iface="eth0" netns="/var/run/netns/cni-99285e4d-5ab4-655d-0f32-f39acede0329" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.468 [INFO][4049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.469 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.565 [INFO][4064] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.565 [INFO][4064] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.574 [INFO][4064] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.608 [WARNING][4064] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.615 [INFO][4064] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.623 [INFO][4064] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:14.641815 containerd[1471]: 2025-09-12 17:40:14.634 [INFO][4049] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:14.643724 containerd[1471]: time="2025-09-12T17:40:14.643262069Z" level=info msg="TearDown network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" successfully" Sep 12 17:40:14.643724 containerd[1471]: time="2025-09-12T17:40:14.643424872Z" level=info msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" returns successfully" Sep 12 17:40:14.647455 systemd[1]: run-netns-cni\x2d99285e4d\x2d5ab4\x2d655d\x2d0f32\x2df39acede0329.mount: Deactivated successfully. Sep 12 17:40:14.648039 containerd[1471]: time="2025-09-12T17:40:14.647970124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-cj5zp,Uid:2abfa125-f454-4039-a6a8-433554adf69c,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:14.920867 systemd-networkd[1365]: cali2cf06bf79f5: Link UP Sep 12 17:40:14.924524 systemd-networkd[1365]: cali2cf06bf79f5: Gained carrier Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.686 [INFO][4080] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.724 [INFO][4080] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0 calico-apiserver-854494659d- calico-apiserver dbf3d71b-4b9a-4508-98f1-871dbdecc9e3 991 0 2025-09-12 17:39:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:854494659d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 calico-apiserver-854494659d-799gf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2cf06bf79f5 [] [] }} ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.726 [INFO][4080] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.820 [INFO][4116] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.820 [INFO][4116] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-756b4d7dc2", 
"pod":"calico-apiserver-854494659d-799gf", "timestamp":"2025-09-12 17:40:14.82053537 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.821 [INFO][4116] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.821 [INFO][4116] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.821 [INFO][4116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.841 [INFO][4116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.863 [INFO][4116] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.879 [INFO][4116] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.882 [INFO][4116] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.886 [INFO][4116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.886 [INFO][4116] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.889 [INFO][4116] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3 Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.900 [INFO][4116] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.908 [INFO][4116] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.130/26] block=192.168.64.128/26 handle="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.908 [INFO][4116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.130/26] handle="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.909 [INFO][4116] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:14.958208 containerd[1471]: 2025-09-12 17:40:14.909 [INFO][4116] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.130/26] IPv6=[] ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.913 [INFO][4080] cni-plugin/k8s.go 418: Populated endpoint ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"calico-apiserver-854494659d-799gf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf06bf79f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.913 [INFO][4080] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.130/32] ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.913 [INFO][4080] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2cf06bf79f5 ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.925 [INFO][4080] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.926 
[INFO][4080] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3", Pod:"calico-apiserver-854494659d-799gf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf06bf79f5", MAC:"f6:49:53:3a:e5:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:14.960681 containerd[1471]: 2025-09-12 17:40:14.946 [INFO][4080] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-799gf" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:15.019284 containerd[1471]: time="2025-09-12T17:40:15.018664299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:15.019284 containerd[1471]: time="2025-09-12T17:40:15.018750214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:15.019284 containerd[1471]: time="2025-09-12T17:40:15.018775418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.019284 containerd[1471]: time="2025-09-12T17:40:15.018910191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.047936 systemd-networkd[1365]: cali41c90d5c9d9: Link UP Sep 12 17:40:15.049602 systemd-networkd[1365]: cali41c90d5c9d9: Gained carrier Sep 12 17:40:15.102394 systemd[1]: Started cri-containerd-490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3.scope - libcontainer container 490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3. 
Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.726 [INFO][4100] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.754 [INFO][4100] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0 calico-apiserver-74f4d49d55- calico-apiserver 2abfa125-f454-4039-a6a8-433554adf69c 992 0 2025-09-12 17:39:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74f4d49d55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 calico-apiserver-74f4d49d55-cj5zp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali41c90d5c9d9 [] [] }} ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.755 [INFO][4100] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.868 [INFO][4124] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" HandleID="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.868 [INFO][4124] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" HandleID="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"calico-apiserver-74f4d49d55-cj5zp", "timestamp":"2025-09-12 17:40:14.868107264 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.868 [INFO][4124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.909 [INFO][4124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.910 [INFO][4124] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.946 [INFO][4124] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.962 [INFO][4124] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.976 [INFO][4124] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.981 [INFO][4124] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.985 [INFO][4124] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.985 [INFO][4124] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:14.990 [INFO][4124] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:15.003 [INFO][4124] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:15.022 [INFO][4124] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.131/26] block=192.168.64.128/26 handle="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:15.023 [INFO][4124] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.131/26] handle="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:15.023 [INFO][4124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:15.122627 containerd[1471]: 2025-09-12 17:40:15.023 [INFO][4124] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.131/26] IPv6=[] ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" HandleID="k8s-pod-network.ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.032 [INFO][4100] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2abfa125-f454-4039-a6a8-433554adf69c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"calico-apiserver-74f4d49d55-cj5zp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c90d5c9d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.035 [INFO][4100] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.131/32] ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.036 [INFO][4100] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41c90d5c9d9 ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.059 [INFO][4100] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.060 
[INFO][4100] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2abfa125-f454-4039-a6a8-433554adf69c", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e", Pod:"calico-apiserver-74f4d49d55-cj5zp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c90d5c9d9", MAC:"72:f5:03:af:97:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:15.124539 containerd[1471]: 2025-09-12 17:40:15.115 [INFO][4100] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-cj5zp" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:15.182359 containerd[1471]: time="2025-09-12T17:40:15.179920083Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:15.182359 containerd[1471]: time="2025-09-12T17:40:15.180941117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:15.182359 containerd[1471]: time="2025-09-12T17:40:15.181131835Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.183065 containerd[1471]: time="2025-09-12T17:40:15.182468104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.225982 systemd-networkd[1365]: calicea41103d7a: Link UP Sep 12 17:40:15.234937 systemd-networkd[1365]: calicea41103d7a: Gained carrier Sep 12 17:40:15.239047 containerd[1471]: time="2025-09-12T17:40:15.238976744Z" level=info msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\"" Sep 12 17:40:15.241329 containerd[1471]: time="2025-09-12T17:40:15.241264430Z" level=info msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\"" Sep 12 17:40:15.244426 systemd[1]: Started cri-containerd-ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e.scope - libcontainer container ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e. Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:14.755 [INFO][4090] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:14.778 [INFO][4090] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0 coredns-668d6bf9bc- kube-system b35527ec-e920-470f-8bab-682f16b3b48b 993 0 2025-09-12 17:39:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 coredns-668d6bf9bc-g49wb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicea41103d7a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:14.778 [INFO][4090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:14.871 [INFO][4130] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" HandleID="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:14.872 [INFO][4130] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" HandleID="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f9a0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"coredns-668d6bf9bc-g49wb", "timestamp":"2025-09-12 17:40:14.871895071 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 
17:40:14.872 [INFO][4130] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.023 [INFO][4130] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.023 [INFO][4130] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.050 [INFO][4130] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.106 [INFO][4130] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.132 [INFO][4130] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.138 [INFO][4130] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.145 [INFO][4130] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.145 [INFO][4130] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.150 [INFO][4130] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.160 [INFO][4130] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.176 [INFO][4130] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.132/26] block=192.168.64.128/26 handle="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.176 [INFO][4130] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.132/26] handle="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.176 [INFO][4130] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
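The block above traces a complete Calico IPAM cycle: the plugin takes the host-wide lock, confirms the node's affinity for the 192.168.64.128/26 block, claims 192.168.64.132 for coredns-668d6bf9bc-g49wb, and releases the lock. A minimal Go sketch (standard library only; the block boundary is read off the log, not taken from Calico source) checks that every address handed out in this log falls inside that /26:

```go
// Sketch: verify the addresses Calico IPAM assigns in this log all fall in
// the node-affine block 192.168.64.128/26 (a /26 spans 64 addresses,
// .128 through .191 here). Standard library only.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.64.128/26") // node-affine IPAM block from the log
	for _, s := range []string{"192.168.64.131", "192.168.64.132", "192.168.64.133", "192.168.64.134"} {
		addr := netip.MustParseAddr(s)
		fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr)) // all true
	}
}
```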
Sep 12 17:40:15.311408 containerd[1471]: 2025-09-12 17:40:15.176 [INFO][4130] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.132/26] IPv6=[] ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" HandleID="k8s-pod-network.4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.198 [INFO][4090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b35527ec-e920-470f-8bab-682f16b3b48b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"coredns-668d6bf9bc-g49wb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicea41103d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.199 [INFO][4090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.132/32] ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.199 [INFO][4090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicea41103d7a ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.234 [INFO][4090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.256 [INFO][4090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b35527ec-e920-470f-8bab-682f16b3b48b", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce", Pod:"coredns-668d6bf9bc-g49wb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicea41103d7a", MAC:"ba:73:82:c4:6d:97", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:15.316013 containerd[1471]: 2025-09-12 17:40:15.293 [INFO][4090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce" Namespace="kube-system" Pod="coredns-668d6bf9bc-g49wb" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:15.431053 systemd-networkd[1365]: calid19508082ad: Gained IPv6LL Sep 12 17:40:15.461530 containerd[1471]: time="2025-09-12T17:40:15.459426620Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:15.463033 containerd[1471]: time="2025-09-12T17:40:15.461742160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:15.463033 containerd[1471]: time="2025-09-12T17:40:15.461785772Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.467561 containerd[1471]: time="2025-09-12T17:40:15.466680396Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:15.532126 kubelet[2512]: I0912 17:40:15.531127 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:15.532126 kubelet[2512]: E0912 17:40:15.531660 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:15.631472 systemd[1]: Started cri-containerd-4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce.scope - libcontainer container 4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce. Sep 12 17:40:15.696564 systemd[1]: run-containerd-runc-k8s.io-4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce-runc.yKpwtx.mount: Deactivated successfully. Sep 12 17:40:15.767121 kubelet[2512]: E0912 17:40:15.766743 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.477 [INFO][4241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.477 [INFO][4241] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" iface="eth0" netns="/var/run/netns/cni-5f86ab76-97c4-c231-178f-851213fc7671" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.477 [INFO][4241] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" iface="eth0" netns="/var/run/netns/cni-5f86ab76-97c4-c231-178f-851213fc7671" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.480 [INFO][4241] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" iface="eth0" netns="/var/run/netns/cni-5f86ab76-97c4-c231-178f-851213fc7671" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.480 [INFO][4241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.480 [INFO][4241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.768 [INFO][4297] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.774 [INFO][4297] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.776 [INFO][4297] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
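The kubelet "Nameserver limits exceeded" warnings above fire because the resolver configuration handed to pods carries more entries than the classic glibc limit of three nameservers, and the applied line even repeats 67.207.67.2. An illustrative sketch of the truncation, assuming an order-preserving dedup plus a cap of three — stricter than what kubelet itself promises, which is why the duplicate survives in the log:

```go
// Illustrative only, not kubelet's actual implementation: cap a resolv.conf
// nameserver list at the glibc limit of 3, dropping duplicates first.
package main

import "fmt"

func capNameservers(ns []string, limit int) []string {
	seen := map[string]bool{}
	out := []string{}
	for _, n := range ns {
		if seen[n] {
			continue // drop duplicates before applying the limit
		}
		seen[n] = true
		out = append(out, n)
		if len(out) == limit {
			break
		}
	}
	return out
}

func main() {
	applied := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2"} // the applied line from the log
	fmt.Println(capNameservers(applied, 3))                         // [67.207.67.2 67.207.67.3]
}
```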
Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.802 [WARNING][4297] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.802 [INFO][4297] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.808 [INFO][4297] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:15.831123 containerd[1471]: 2025-09-12 17:40:15.819 [INFO][4241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:15.832652 containerd[1471]: time="2025-09-12T17:40:15.832217957Z" level=info msg="TearDown network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" successfully" Sep 12 17:40:15.832652 containerd[1471]: time="2025-09-12T17:40:15.832303534Z" level=info msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" returns successfully" Sep 12 17:40:15.839609 systemd[1]: run-netns-cni\x2d5f86ab76\x2d97c4\x2dc231\x2d178f\x2d851213fc7671.mount: Deactivated successfully. Sep 12 17:40:15.841528 containerd[1471]: time="2025-09-12T17:40:15.839635991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc9d95797-xw78z,Uid:8e802a89-f9ff-49ae-94df-fd49494b3d9b,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:15.892633 containerd[1471]: time="2025-09-12T17:40:15.891902201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-799gf,Uid:dbf3d71b-4b9a-4508-98f1-871dbdecc9e3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\"" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.517 [INFO][4250] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.517 [INFO][4250] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" iface="eth0" netns="/var/run/netns/cni-e267262c-20e6-e882-70ab-d24673f32b3a" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.537 [INFO][4250] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" iface="eth0" netns="/var/run/netns/cni-e267262c-20e6-e882-70ab-d24673f32b3a" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.546 [INFO][4250] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" iface="eth0" netns="/var/run/netns/cni-e267262c-20e6-e882-70ab-d24673f32b3a" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.547 [INFO][4250] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.547 [INFO][4250] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.789 [INFO][4311] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.800 [INFO][4311] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.810 [INFO][4311] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.874 [WARNING][4311] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.876 [INFO][4311] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.895 [INFO][4311] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:15.929833 containerd[1471]: 2025-09-12 17:40:15.924 [INFO][4250] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:15.933976 containerd[1471]: time="2025-09-12T17:40:15.933548438Z" level=info msg="TearDown network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" successfully" Sep 12 17:40:15.941846 containerd[1471]: time="2025-09-12T17:40:15.941625659Z" level=info msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" returns successfully" Sep 12 17:40:15.948185 containerd[1471]: time="2025-09-12T17:40:15.947178245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k29j,Uid:06c23272-db86-4e7e-9c53-92578f077ab6,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:15.969397 containerd[1471]: time="2025-09-12T17:40:15.967688707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-g49wb,Uid:b35527ec-e920-470f-8bab-682f16b3b48b,Namespace:kube-system,Attempt:1,} returns sandbox id \"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce\"" Sep 12 17:40:15.977912 kubelet[2512]: E0912 17:40:15.977475 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:15.990560 containerd[1471]: time="2025-09-12T17:40:15.989691853Z" level=info msg="CreateContainer within sandbox \"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:16.180209 containerd[1471]: time="2025-09-12T17:40:16.180159875Z" level=info msg="CreateContainer within sandbox \"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f29a5804c3a2cb7c5007954ee586283c2c660d893e79561e47f58cb177270a9c\"" Sep 12 17:40:16.185038 containerd[1471]: time="2025-09-12T17:40:16.184051500Z" level=info msg="StartContainer for \"f29a5804c3a2cb7c5007954ee586283c2c660d893e79561e47f58cb177270a9c\"" Sep 12 17:40:16.276829 containerd[1471]: time="2025-09-12T17:40:16.275418603Z" level=info msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" Sep 12 17:40:16.308433 systemd-networkd[1365]: cali2cf06bf79f5: Gained IPv6LL Sep 12 17:40:16.323791 containerd[1471]: time="2025-09-12T17:40:16.317289310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-cj5zp,Uid:2abfa125-f454-4039-a6a8-433554adf69c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e\"" Sep 12 17:40:16.545548 systemd[1]: Started cri-containerd-f29a5804c3a2cb7c5007954ee586283c2c660d893e79561e47f58cb177270a9c.scope - libcontainer container f29a5804c3a2cb7c5007954ee586283c2c660d893e79561e47f58cb177270a9c. Sep 12 17:40:16.574378 systemd[1]: run-netns-cni\x2de267262c\x2d20e6\x2de882\x2d70ab\x2dd24673f32b3a.mount: Deactivated successfully. 
Sep 12 17:40:16.693205 systemd-networkd[1365]: cali41c90d5c9d9: Gained IPv6LL Sep 12 17:40:16.814135 containerd[1471]: time="2025-09-12T17:40:16.813978165Z" level=info msg="StartContainer for \"f29a5804c3a2cb7c5007954ee586283c2c660d893e79561e47f58cb177270a9c\" returns successfully" Sep 12 17:40:16.894067 systemd-networkd[1365]: cali7ce226be312: Link UP Sep 12 17:40:16.908653 systemd-networkd[1365]: cali7ce226be312: Gained carrier Sep 12 17:40:16.933898 containerd[1471]: time="2025-09-12T17:40:16.933816345Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.936563 containerd[1471]: time="2025-09-12T17:40:16.936504610Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:40:16.938165 containerd[1471]: time="2025-09-12T17:40:16.937219968Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.947929 containerd[1471]: time="2025-09-12T17:40:16.947602321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:16.950440 containerd[1471]: time="2025-09-12T17:40:16.950359882Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.134135005s" Sep 12 17:40:16.950440 containerd[1471]: time="2025-09-12T17:40:16.950425351Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:40:16.954642 containerd[1471]: time="2025-09-12T17:40:16.954223340Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:16.955450 containerd[1471]: time="2025-09-12T17:40:16.955409097Z" level=info msg="CreateContainer within sandbox \"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:40:16.988612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount389058634.mount: Deactivated successfully. 
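The whisker pull above reports size "6153986" fetched in 3.134135005s, alongside "active requests=0, bytes read=4661291"; which figure is the on-wire byte count is an assumption, but the arithmetic either way lands near 2 MB/s:

```go
// Back-of-envelope throughput check on the whisker image pull logged above.
// Whether 6153986 (reported size) or 4661291 (bytes read) is the transfer
// size is an assumption; this shows the arithmetic for the former.
package main

import (
	"fmt"
	"time"
)

func main() {
	const size = 6153986.0 // bytes, from the PullImage log line
	dur := 3134135005 * time.Nanosecond
	rate := size / dur.Seconds()
	fmt.Printf("~%.2f MiB/s\n", rate/(1<<20)) // ~1.87 MiB/s
}
```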
Sep 12 17:40:17.025004 containerd[1471]: time="2025-09-12T17:40:17.024793747Z" level=info msg="CreateContainer within sandbox \"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"ec3047a06773d13b2ffbaa29a7c06e3b00f0086d9c7cc8efafbb8f8374a5346b\"" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.319 [INFO][4365] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.376 [INFO][4365] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0 csi-node-driver- calico-system 06c23272-db86-4e7e-9c53-92578f077ab6 1009 0 2025-09-12 17:39:50 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 csi-node-driver-4k29j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7ce226be312 [] [] }} ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.376 [INFO][4365] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.623 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" HandleID="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.623 [INFO][4415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" HandleID="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d61a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"csi-node-driver-4k29j", "timestamp":"2025-09-12 17:40:16.623646529 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.624 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.624 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.632 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.668 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.688 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.734 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.745 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.758 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.758 [INFO][4415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.769 [INFO][4415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493 Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.796 [INFO][4415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.838 [INFO][4415] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.133/26] block=192.168.64.128/26 handle="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.838 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.133/26] handle="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.838 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:17.025897 containerd[1471]: 2025-09-12 17:40:16.838 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.133/26] IPv6=[] ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" HandleID="k8s-pod-network.8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:16.858 [INFO][4365] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06c23272-db86-4e7e-9c53-92578f077ab6", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"csi-node-driver-4k29j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ce226be312", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:16.859 [INFO][4365] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.133/32] ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:16.859 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ce226be312 ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:16.920 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:16.925 [INFO][4365] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06c23272-db86-4e7e-9c53-92578f077ab6", ResourceVersion:"1009", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493", Pod:"csi-node-driver-4k29j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ce226be312", MAC:"2a:80:83:c2:47:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.029302 containerd[1471]: 2025-09-12 17:40:17.003 [INFO][4365] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493" Namespace="calico-system" Pod="csi-node-driver-4k29j" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:17.046900 containerd[1471]: time="2025-09-12T17:40:17.046236245Z" level=info msg="StartContainer for \"ec3047a06773d13b2ffbaa29a7c06e3b00f0086d9c7cc8efafbb8f8374a5346b\"" Sep 12 17:40:17.153436 systemd-networkd[1365]: calib2e7d72fd6d: Link UP Sep 12 17:40:17.164320 containerd[1471]: time="2025-09-12T17:40:17.162783657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:17.164320 containerd[1471]: time="2025-09-12T17:40:17.163745849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:17.164320 containerd[1471]: time="2025-09-12T17:40:17.163844624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.164320 containerd[1471]: time="2025-09-12T17:40:17.164178660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.171575 systemd-networkd[1365]: calib2e7d72fd6d: Gained carrier Sep 12 17:40:17.198860 systemd[1]: Started cri-containerd-8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493.scope - libcontainer container 8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493. 
Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.913 [INFO][4405] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.914 [INFO][4405] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" iface="eth0" netns="/var/run/netns/cni-2c2070a1-94b2-1d28-b7e0-36f38742616a" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.920 [INFO][4405] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" iface="eth0" netns="/var/run/netns/cni-2c2070a1-94b2-1d28-b7e0-36f38742616a" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.921 [INFO][4405] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" iface="eth0" netns="/var/run/netns/cni-2c2070a1-94b2-1d28-b7e0-36f38742616a" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.921 [INFO][4405] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:16.921 [INFO][4405] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.047 [INFO][4476] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.050 [INFO][4476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.111 [INFO][4476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.140 [WARNING][4476] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.140 [INFO][4476] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.147 [INFO][4476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:17.214341 containerd[1471]: 2025-09-12 17:40:17.208 [INFO][4405] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:17.216596 containerd[1471]: time="2025-09-12T17:40:17.214610912Z" level=info msg="TearDown network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" successfully" Sep 12 17:40:17.216596 containerd[1471]: time="2025-09-12T17:40:17.214643427Z" level=info msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" returns successfully" Sep 12 17:40:17.224259 containerd[1471]: time="2025-09-12T17:40:17.223706213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-srrrn,Uid:eb8fd32a-7751-44df-ab7f-52654a5a58c4,Namespace:calico-system,Attempt:1,}" Sep 12 17:40:17.237794 containerd[1471]: time="2025-09-12T17:40:17.237731222Z" level=info msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" Sep 12 17:40:17.261524 systemd[1]: Started cri-containerd-ec3047a06773d13b2ffbaa29a7c06e3b00f0086d9c7cc8efafbb8f8374a5346b.scope - libcontainer container ec3047a06773d13b2ffbaa29a7c06e3b00f0086d9c7cc8efafbb8f8374a5346b. Sep 12 17:40:17.267378 systemd-networkd[1365]: calicea41103d7a: Gained IPv6LL Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.234 [INFO][4349] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.348 [INFO][4349] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0 calico-kube-controllers-dc9d95797- calico-system 8e802a89-f9ff-49ae-94df-fd49494b3d9b 1008 0 2025-09-12 17:39:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:dc9d95797 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 calico-kube-controllers-dc9d95797-xw78z eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib2e7d72fd6d [] [] }} ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.348 [INFO][4349] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.657 [INFO][4404] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" HandleID="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.660 [INFO][4404] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" HandleID="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" 
Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000357050), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"calico-kube-controllers-dc9d95797-xw78z", "timestamp":"2025-09-12 17:40:16.653987437 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.660 [INFO][4404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.845 [INFO][4404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.845 [INFO][4404] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.875 [INFO][4404] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:16.957 [INFO][4404] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.017 [INFO][4404] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.030 [INFO][4404] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.036 [INFO][4404] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.037 [INFO][4404] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.041 [INFO][4404] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5 Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.073 [INFO][4404] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.110 [INFO][4404] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.134/26] block=192.168.64.128/26 handle="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.110 [INFO][4404] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.134/26] handle="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.110 [INFO][4404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:17.287985 containerd[1471]: 2025-09-12 17:40:17.110 [INFO][4404] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.134/26] IPv6=[] ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" HandleID="k8s-pod-network.597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.131 [INFO][4349] cni-plugin/k8s.go 418: Populated endpoint ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0", GenerateName:"calico-kube-controllers-dc9d95797-", Namespace:"calico-system", SelfLink:"", UID:"8e802a89-f9ff-49ae-94df-fd49494b3d9b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc9d95797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"calico-kube-controllers-dc9d95797-xw78z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2e7d72fd6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.131 [INFO][4349] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.134/32] ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.131 [INFO][4349] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2e7d72fd6d ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.188 [INFO][4349] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" 
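The Profiles lists in the endpoint dumps above also follow a plain pattern: one namespace profile kns.&lt;namespace&gt; and one service-account profile ksa.&lt;namespace&gt;.&lt;serviceaccount&gt;. A sketch of that naming, inferred from the log rather than from Calico documentation:

```go
// Sketch: reproduce the profile names attached to the workload endpoints
// above, e.g. [kns.calico-system ksa.calico-system.calico-kube-controllers].
package main

import "fmt"

func profiles(namespace, serviceAccount string) []string {
	return []string{
		"kns." + namespace,                        // namespace profile
		"ksa." + namespace + "." + serviceAccount, // service-account profile
	}
}

func main() {
	fmt.Println(profiles("calico-system", "calico-kube-controllers"))
	// [kns.calico-system ksa.calico-system.calico-kube-controllers]
}
```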
Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.188 [INFO][4349] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0", GenerateName:"calico-kube-controllers-dc9d95797-", Namespace:"calico-system", SelfLink:"", UID:"8e802a89-f9ff-49ae-94df-fd49494b3d9b", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc9d95797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5", Pod:"calico-kube-controllers-dc9d95797-xw78z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2e7d72fd6d", MAC:"2e:7c:fa:28:41:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.290731 containerd[1471]: 2025-09-12 17:40:17.239 [INFO][4349] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5" Namespace="calico-system" Pod="calico-kube-controllers-dc9d95797-xw78z" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:17.471309 containerd[1471]: time="2025-09-12T17:40:17.470757228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4k29j,Uid:06c23272-db86-4e7e-9c53-92578f077ab6,Namespace:calico-system,Attempt:1,} returns sandbox id \"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493\"" Sep 12 17:40:17.500130 containerd[1471]: time="2025-09-12T17:40:17.497418345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:17.500130 containerd[1471]: time="2025-09-12T17:40:17.497557244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:17.500130 containerd[1471]: time="2025-09-12T17:40:17.497592293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.500708 containerd[1471]: time="2025-09-12T17:40:17.500561128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:17.568176 systemd[1]: run-netns-cni\x2d2c2070a1\x2d94b2\x2d1d28\x2db7e0\x2d36f38742616a.mount: Deactivated successfully. Sep 12 17:40:17.593494 systemd[1]: Started cri-containerd-597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5.scope - libcontainer container 597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5. Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.617 [INFO][4579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.617 [INFO][4579] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" iface="eth0" netns="/var/run/netns/cni-ac18f306-3e30-77d9-75cd-11ee738be1e3" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.621 [INFO][4579] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" iface="eth0" netns="/var/run/netns/cni-ac18f306-3e30-77d9-75cd-11ee738be1e3" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.624 [INFO][4579] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" iface="eth0" netns="/var/run/netns/cni-ac18f306-3e30-77d9-75cd-11ee738be1e3" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.624 [INFO][4579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.624 [INFO][4579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.692 [INFO][4634] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.695 [INFO][4634] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.695 [INFO][4634] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.713 [WARNING][4634] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.714 [INFO][4634] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.718 [INFO][4634] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
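Two details of the teardown above are worth decoding. First, releasing the old sandbox's IP by handle fails ("Asked to release address but it doesn't exist") and Calico falls back to releasing by workload ID, as the log shows. Second, the systemd mount unit run-netns-cni\x2d2c2070a1….mount encodes the netns path using systemd's unit-name escaping: '/' becomes '-' and a literal '-' becomes \x2d. A rough Go re-creation of the unescaping (systemd's real implementation lives in unit_name_unescape; this sketch is mine):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit reverses systemd's path escaping for unit names:
// "/" is encoded as "-" and other special bytes as \xNN.
func unescapeUnit(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	s = strings.ReplaceAll(s, "-", "/") // "-" separates path components
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		if s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3
				continue
			}
		}
		b.WriteByte(s[i])
	}
	return "/" + b.String()
}

func main() {
	unit := `run-netns-cni\x2d2c2070a1\x2d94b2\x2d1d28\x2db7e0\x2d36f38742616a.mount`
	fmt.Println(unescapeUnit(unit))
	// /run/netns/cni-2c2070a1-94b2-1d28-b7e0-36f38742616a
}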
Sep 12 17:40:17.730427 containerd[1471]: 2025-09-12 17:40:17.728 [INFO][4579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:17.733630 containerd[1471]: time="2025-09-12T17:40:17.730640796Z" level=info msg="TearDown network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" successfully" Sep 12 17:40:17.733630 containerd[1471]: time="2025-09-12T17:40:17.730700173Z" level=info msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" returns successfully" Sep 12 17:40:17.733630 containerd[1471]: time="2025-09-12T17:40:17.732512902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t55ll,Uid:5925fe00-d8c6-4534-a505-f32b5402931d,Namespace:kube-system,Attempt:1,}" Sep 12 17:40:17.733843 kubelet[2512]: E0912 17:40:17.731537 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:17.737634 systemd[1]: run-netns-cni\x2dac18f306\x2d3e30\x2d77d9\x2d75cd\x2d11ee738be1e3.mount: Deactivated successfully. Sep 12 17:40:17.877562 kubelet[2512]: E0912 17:40:17.873533 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:17.877712 systemd-networkd[1365]: cali1cd7628fc3e: Link UP Sep 12 17:40:17.880210 systemd-networkd[1365]: cali1cd7628fc3e: Gained carrier Sep 12 17:40:17.965723 kubelet[2512]: I0912 17:40:17.956697 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-g49wb" podStartSLOduration=46.956637192 podStartE2EDuration="46.956637192s" podCreationTimestamp="2025-09-12 17:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:17.904972399 +0000 UTC m=+51.861199389" watchObservedRunningTime="2025-09-12 17:40:17.956637192 +0000 UTC m=+51.912864130" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.498 [INFO][4561] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.548 [INFO][4561] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0 goldmane-54d579b49d- calico-system eb8fd32a-7751-44df-ab7f-52654a5a58c4 1030 0 2025-09-12 17:39:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 goldmane-54d579b49d-srrrn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1cd7628fc3e [] [] }} ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.548 [INFO][4561] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" 
Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.725 [INFO][4621] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" HandleID="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.725 [INFO][4621] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" HandleID="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003631d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"goldmane-54d579b49d-srrrn", "timestamp":"2025-09-12 17:40:17.725031145 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.725 [INFO][4621] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.725 [INFO][4621] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.725 [INFO][4621] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.755 [INFO][4621] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.772 [INFO][4621] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.801 [INFO][4621] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.807 [INFO][4621] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.813 [INFO][4621] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.813 [INFO][4621] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.817 [INFO][4621] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749 Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.826 [INFO][4621] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 
17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.837 [INFO][4621] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.135/26] block=192.168.64.128/26 handle="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.837 [INFO][4621] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.135/26] handle="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.837 [INFO][4621] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:17.987897 containerd[1471]: 2025-09-12 17:40:17.837 [INFO][4621] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.135/26] IPv6=[] ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" HandleID="k8s-pod-network.6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.850 [INFO][4561] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eb8fd32a-7751-44df-ab7f-52654a5a58c4", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"goldmane-54d579b49d-srrrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd7628fc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.851 [INFO][4561] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.135/32] ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.851 [INFO][4561] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1cd7628fc3e ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" 
WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.883 [INFO][4561] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.887 [INFO][4561] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eb8fd32a-7751-44df-ab7f-52654a5a58c4", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749", Pod:"goldmane-54d579b49d-srrrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd7628fc3e", MAC:"5a:86:b6:81:8f:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:17.993043 containerd[1471]: 2025-09-12 17:40:17.955 [INFO][4561] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749" Namespace="calico-system" Pod="goldmane-54d579b49d-srrrn" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:18.011351 containerd[1471]: time="2025-09-12T17:40:18.011059607Z" level=info msg="StartContainer for \"ec3047a06773d13b2ffbaa29a7c06e3b00f0086d9c7cc8efafbb8f8374a5346b\" returns successfully" Sep 12 17:40:18.053906 kubelet[2512]: I0912 17:40:18.053354 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:18.120178 containerd[1471]: time="2025-09-12T17:40:18.107035968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:18.120178 containerd[1471]: time="2025-09-12T17:40:18.107169706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:18.120178 containerd[1471]: time="2025-09-12T17:40:18.107189982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:18.120178 containerd[1471]: time="2025-09-12T17:40:18.107346253Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:18.136551 kernel: bpftool[4700]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:40:18.169456 systemd[1]: Started cri-containerd-6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749.scope - libcontainer container 6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749. Sep 12 17:40:18.273927 containerd[1471]: time="2025-09-12T17:40:18.273883953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-dc9d95797-xw78z,Uid:8e802a89-f9ff-49ae-94df-fd49494b3d9b,Namespace:calico-system,Attempt:1,} returns sandbox id \"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5\"" Sep 12 17:40:18.415178 systemd-networkd[1365]: calid40e8a897c6: Link UP Sep 12 17:40:18.420804 systemd-networkd[1365]: calid40e8a897c6: Gained carrier Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.068 [INFO][4646] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0 coredns-668d6bf9bc- kube-system 5925fe00-d8c6-4534-a505-f32b5402931d 1040 0 2025-09-12 17:39:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 coredns-668d6bf9bc-t55ll eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid40e8a897c6 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.072 [INFO][4646] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.314 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" HandleID="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.315 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" HandleID="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122cb0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"coredns-668d6bf9bc-t55ll", 
"timestamp":"2025-09-12 17:40:18.313716378 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.316 [INFO][4699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.316 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.318 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.333 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.341 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.351 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.354 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.360 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.360 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.365 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576 Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.372 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.391 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.136/26] block=192.168.64.128/26 handle="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.391 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.136/26] handle="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.391 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:18.462010 containerd[1471]: 2025-09-12 17:40:18.391 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.136/26] IPv6=[] ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" HandleID="k8s-pod-network.437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.403 [INFO][4646] cni-plugin/k8s.go 418: Populated endpoint ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5925fe00-d8c6-4534-a505-f32b5402931d", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"coredns-668d6bf9bc-t55ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40e8a897c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.403 [INFO][4646] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.136/32] ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.403 [INFO][4646] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid40e8a897c6 ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.424 [INFO][4646] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" 
Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.425 [INFO][4646] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5925fe00-d8c6-4534-a505-f32b5402931d", ResourceVersion:"1040", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576", Pod:"coredns-668d6bf9bc-t55ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40e8a897c6", MAC:"fa:49:be:59:1b:ab", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:18.465552 containerd[1471]: 2025-09-12 17:40:18.449 [INFO][4646] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576" Namespace="kube-system" Pod="coredns-668d6bf9bc-t55ll" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:18.537886 containerd[1471]: time="2025-09-12T17:40:18.533317005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:18.537886 containerd[1471]: time="2025-09-12T17:40:18.534577481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:18.537886 containerd[1471]: time="2025-09-12T17:40:18.534694647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:18.537886 containerd[1471]: time="2025-09-12T17:40:18.536303358Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:18.608096 containerd[1471]: time="2025-09-12T17:40:18.607643692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-srrrn,Uid:eb8fd32a-7751-44df-ab7f-52654a5a58c4,Namespace:calico-system,Attempt:1,} returns sandbox id \"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749\"" Sep 12 17:40:18.612301 systemd-networkd[1365]: cali7ce226be312: Gained IPv6LL Sep 12 17:40:18.620968 systemd[1]: Started cri-containerd-437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576.scope - libcontainer container 437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576. Sep 12 17:40:18.781039 containerd[1471]: time="2025-09-12T17:40:18.780573977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t55ll,Uid:5925fe00-d8c6-4534-a505-f32b5402931d,Namespace:kube-system,Attempt:1,} returns sandbox id \"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576\"" Sep 12 17:40:18.782385 kubelet[2512]: E0912 17:40:18.782353 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:18.792990 containerd[1471]: time="2025-09-12T17:40:18.792808533Z" level=info msg="CreateContainer within sandbox \"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:40:18.832707 containerd[1471]: time="2025-09-12T17:40:18.832539203Z" level=info msg="CreateContainer within sandbox \"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a06f53a88e726095aee968a9f3d67f159d89d4708bc4469c612c0418d912981f\"" Sep 12 17:40:18.834129 containerd[1471]: time="2025-09-12T17:40:18.833643570Z" level=info msg="StartContainer for \"a06f53a88e726095aee968a9f3d67f159d89d4708bc4469c612c0418d912981f\"" Sep 12 17:40:18.867943 systemd-networkd[1365]: calib2e7d72fd6d: Gained IPv6LL Sep 12 17:40:18.932002 systemd-networkd[1365]: cali1cd7628fc3e: Gained IPv6LL Sep 12 17:40:18.934642 systemd[1]: Started cri-containerd-a06f53a88e726095aee968a9f3d67f159d89d4708bc4469c612c0418d912981f.scope - libcontainer container a06f53a88e726095aee968a9f3d67f159d89d4708bc4469c612c0418d912981f. Sep 12 17:40:18.975767 kubelet[2512]: E0912 17:40:18.975641 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:19.054766 containerd[1471]: time="2025-09-12T17:40:19.054605563Z" level=info msg="StartContainer for \"a06f53a88e726095aee968a9f3d67f159d89d4708bc4469c612c0418d912981f\" returns successfully" Sep 12 17:40:19.234954 containerd[1471]: time="2025-09-12T17:40:19.234837977Z" level=info msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.369 [INFO][4876] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.371 [INFO][4876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" iface="eth0" netns="/var/run/netns/cni-59cfe653-78f7-ecb5-3d42-7a74eef679f9" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.371 [INFO][4876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" iface="eth0" netns="/var/run/netns/cni-59cfe653-78f7-ecb5-3d42-7a74eef679f9" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.372 [INFO][4876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" iface="eth0" netns="/var/run/netns/cni-59cfe653-78f7-ecb5-3d42-7a74eef679f9" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.374 [INFO][4876] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.374 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.448 [INFO][4883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.449 [INFO][4883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.449 [INFO][4883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.479 [WARNING][4883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.479 [INFO][4883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.482 [INFO][4883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:19.494314 containerd[1471]: 2025-09-12 17:40:19.489 [INFO][4876] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:19.497984 containerd[1471]: time="2025-09-12T17:40:19.497799851Z" level=info msg="TearDown network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" successfully" Sep 12 17:40:19.497984 containerd[1471]: time="2025-09-12T17:40:19.497843182Z" level=info msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" returns successfully" Sep 12 17:40:19.504227 containerd[1471]: time="2025-09-12T17:40:19.503587844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-kvgvx,Uid:fcab9732-3c6c-4b48-8986-45fc7d97bb57,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:40:19.574538 systemd[1]: run-netns-cni\x2d59cfe653\x2d78f7\x2decb5\x2d3d42\x2d7a74eef679f9.mount: Deactivated successfully. Sep 12 17:40:19.891909 systemd-networkd[1365]: calid40e8a897c6: Gained IPv6LL Sep 12 17:40:19.981837 kubelet[2512]: E0912 17:40:19.981804 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:19.984397 kubelet[2512]: E0912 17:40:19.981927 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:19.995878 systemd[1]: Started sshd@7-144.126.222.162:22-147.75.109.163:45532.service - OpenSSH per-connection server daemon (147.75.109.163:45532). Sep 12 17:40:20.046895 kubelet[2512]: I0912 17:40:20.046822 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t55ll" podStartSLOduration=49.046798868 podStartE2EDuration="49.046798868s" podCreationTimestamp="2025-09-12 17:39:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:20.018957359 +0000 UTC m=+53.975184285" watchObservedRunningTime="2025-09-12 17:40:20.046798868 +0000 UTC m=+54.003025796" Sep 12 17:40:20.235308 sshd[4926]: Accepted publickey for core from 147.75.109.163 port 45532 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:20.244793 sshd[4926]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:20.267263 systemd-logind[1445]: New session 8 of user core. Sep 12 17:40:20.272093 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:40:20.642584 systemd-networkd[1365]: calibe48816aabe: Link UP Sep 12 17:40:20.651004 systemd-networkd[1365]: calibe48816aabe: Gained carrier Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:19.789 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0 calico-apiserver-854494659d- calico-apiserver fcab9732-3c6c-4b48-8986-45fc7d97bb57 1071 0 2025-09-12 17:39:44 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:854494659d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 calico-apiserver-854494659d-kvgvx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calibe48816aabe [] [] }} ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:19.789 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.083 [INFO][4921] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.084 [INFO][4921] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000395a00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"calico-apiserver-854494659d-kvgvx", "timestamp":"2025-09-12 17:40:20.083377625 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.084 [INFO][4921] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.084 [INFO][4921] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.084 [INFO][4921] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.184 [INFO][4921] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.304 [INFO][4921] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.392 [INFO][4921] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.409 [INFO][4921] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.440 [INFO][4921] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.440 [INFO][4921] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.452 [INFO][4921] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.511 [INFO][4921] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.604 [INFO][4921] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.137/26] block=192.168.64.128/26 handle="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.606 [INFO][4921] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.137/26] handle="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.606 [INFO][4921] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 17:40:20.756269 containerd[1471]: 2025-09-12 17:40:20.606 [INFO][4921] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.137/26] IPv6=[] ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.620 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcab9732-3c6c-4b48-8986-45fc7d97bb57", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"calico-apiserver-854494659d-kvgvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe48816aabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.627 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.137/32] ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.632 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe48816aabe ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.652 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.658 
[INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcab9732-3c6c-4b48-8986-45fc7d97bb57", ResourceVersion:"1071", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f", Pod:"calico-apiserver-854494659d-kvgvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe48816aabe", MAC:"86:b9:36:95:aa:b8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:20.757597 containerd[1471]: 2025-09-12 17:40:20.744 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Namespace="calico-apiserver" Pod="calico-apiserver-854494659d-kvgvx" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:20.851159 systemd-networkd[1365]: vxlan.calico: Link UP Sep 12 17:40:20.851173 systemd-networkd[1365]: vxlan.calico: Gained carrier Sep 12 17:40:21.008417 kubelet[2512]: E0912 17:40:21.006512 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:21.097535 containerd[1471]: time="2025-09-12T17:40:21.095931550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:21.097535 containerd[1471]: time="2025-09-12T17:40:21.096040805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:21.097535 containerd[1471]: time="2025-09-12T17:40:21.096067397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:21.097535 containerd[1471]: time="2025-09-12T17:40:21.096230053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:21.189891 sshd[4926]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:21.213851 systemd[1]: sshd@7-144.126.222.162:22-147.75.109.163:45532.service: Deactivated successfully. Sep 12 17:40:21.218419 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:40:21.235230 systemd-logind[1445]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:40:21.240136 systemd[1]: run-containerd-runc-k8s.io-d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f-runc.v8TJEu.mount: Deactivated successfully. Sep 12 17:40:21.254631 systemd-logind[1445]: Removed session 8. Sep 12 17:40:21.264232 systemd[1]: Started cri-containerd-d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f.scope - libcontainer container d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f. Sep 12 17:40:21.730367 containerd[1471]: time="2025-09-12T17:40:21.730264073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-854494659d-kvgvx,Uid:fcab9732-3c6c-4b48-8986-45fc7d97bb57,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\"" Sep 12 17:40:22.017937 kubelet[2512]: E0912 17:40:22.016361 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:22.260661 systemd-networkd[1365]: vxlan.calico: Gained IPv6LL Sep 12 17:40:22.580450 systemd-networkd[1365]: calibe48816aabe: Gained IPv6LL Sep 12 17:40:22.772047 containerd[1471]: time="2025-09-12T17:40:22.771966920Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.773664 containerd[1471]: time="2025-09-12T17:40:22.773338628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:40:22.774849 containerd[1471]: time="2025-09-12T17:40:22.774773803Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.778750 containerd[1471]: time="2025-09-12T17:40:22.778459514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:22.779523 containerd[1471]: time="2025-09-12T17:40:22.779460121Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.825189541s" Sep 12 17:40:22.779523 containerd[1471]: time="2025-09-12T17:40:22.779504551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:22.781646 containerd[1471]: time="2025-09-12T17:40:22.781604105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:22.784784 containerd[1471]: time="2025-09-12T17:40:22.784452032Z" level=info 
msg="CreateContainer within sandbox \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:22.809446 containerd[1471]: time="2025-09-12T17:40:22.809378552Z" level=info msg="CreateContainer within sandbox \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\"" Sep 12 17:40:22.811283 containerd[1471]: time="2025-09-12T17:40:22.811219025Z" level=info msg="StartContainer for \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\"" Sep 12 17:40:22.865463 systemd[1]: Started cri-containerd-3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200.scope - libcontainer container 3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200. Sep 12 17:40:22.933830 containerd[1471]: time="2025-09-12T17:40:22.933583908Z" level=info msg="StartContainer for \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\" returns successfully" Sep 12 17:40:23.208224 containerd[1471]: time="2025-09-12T17:40:23.207952792Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:23.209511 containerd[1471]: time="2025-09-12T17:40:23.209395418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:23.213883 containerd[1471]: time="2025-09-12T17:40:23.213646145Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 432.000568ms" Sep 12 17:40:23.213883 containerd[1471]: time="2025-09-12T17:40:23.213729704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:23.216889 containerd[1471]: time="2025-09-12T17:40:23.216842019Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:40:23.221969 containerd[1471]: time="2025-09-12T17:40:23.221729015Z" level=info msg="CreateContainer within sandbox \"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:23.243724 containerd[1471]: time="2025-09-12T17:40:23.243654141Z" level=info msg="CreateContainer within sandbox \"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"73a48c9852979c7a89d283c353b183f1b1c37522b22a52d9075b49b705ecc7ca\"" Sep 12 17:40:23.245498 containerd[1471]: time="2025-09-12T17:40:23.245323165Z" level=info msg="StartContainer for \"73a48c9852979c7a89d283c353b183f1b1c37522b22a52d9075b49b705ecc7ca\"" Sep 12 17:40:23.307863 systemd[1]: Started cri-containerd-73a48c9852979c7a89d283c353b183f1b1c37522b22a52d9075b49b705ecc7ca.scope - libcontainer container 73a48c9852979c7a89d283c353b183f1b1c37522b22a52d9075b49b705ecc7ca. 
Sep 12 17:40:23.389368 containerd[1471]: time="2025-09-12T17:40:23.388912606Z" level=info msg="StartContainer for \"73a48c9852979c7a89d283c353b183f1b1c37522b22a52d9075b49b705ecc7ca\" returns successfully"
Sep 12 17:40:24.032945 kubelet[2512]: I0912 17:40:24.032002 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:40:24.052711 kubelet[2512]: I0912 17:40:24.052610 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-854494659d-799gf" podStartSLOduration=33.179897948 podStartE2EDuration="40.052576059s" podCreationTimestamp="2025-09-12 17:39:44 +0000 UTC" firstStartedPulling="2025-09-12 17:40:15.90841355 +0000 UTC m=+49.864640468" lastFinishedPulling="2025-09-12 17:40:22.781091653 +0000 UTC m=+56.737318579" observedRunningTime="2025-09-12 17:40:23.045525581 +0000 UTC m=+57.001752525" watchObservedRunningTime="2025-09-12 17:40:24.052576059 +0000 UTC m=+58.008803005"
Sep 12 17:40:25.191114 containerd[1471]: time="2025-09-12T17:40:25.191025166Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:25.193107 containerd[1471]: time="2025-09-12T17:40:25.192982907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 17:40:25.194119 containerd[1471]: time="2025-09-12T17:40:25.193935496Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:25.200236 containerd[1471]: time="2025-09-12T17:40:25.200024359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:40:25.204522 containerd[1471]: time="2025-09-12T17:40:25.203901930Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 1.98700723s"
Sep 12 17:40:25.204522 containerd[1471]: time="2025-09-12T17:40:25.203947287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 17:40:25.207148 containerd[1471]: time="2025-09-12T17:40:25.206425549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:40:25.215351 containerd[1471]: time="2025-09-12T17:40:25.215287861Z" level=info msg="CreateContainer within sandbox \"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:40:25.254669 containerd[1471]: time="2025-09-12T17:40:25.250919073Z" level=info msg="CreateContainer within sandbox \"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8442b680e901914d0262c964d602f362802ff2b1860b58596c32d93d0a67bb17\""
Sep 12 17:40:25.254669 containerd[1471]: time="2025-09-12T17:40:25.253629807Z" level=info msg="StartContainer for \"8442b680e901914d0262c964d602f362802ff2b1860b58596c32d93d0a67bb17\""
Sep 12 17:40:25.265044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3459923800.mount: Deactivated successfully.
Sep 12 17:40:25.333428 systemd[1]: Started cri-containerd-8442b680e901914d0262c964d602f362802ff2b1860b58596c32d93d0a67bb17.scope - libcontainer container 8442b680e901914d0262c964d602f362802ff2b1860b58596c32d93d0a67bb17.
Sep 12 17:40:25.480207 containerd[1471]: time="2025-09-12T17:40:25.479985959Z" level=info msg="StartContainer for \"8442b680e901914d0262c964d602f362802ff2b1860b58596c32d93d0a67bb17\" returns successfully"
Sep 12 17:40:25.750241 kubelet[2512]: I0912 17:40:25.747762 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74f4d49d55-cj5zp" podStartSLOduration=31.868985938 podStartE2EDuration="38.747732447s" podCreationTimestamp="2025-09-12 17:39:47 +0000 UTC" firstStartedPulling="2025-09-12 17:40:16.337543734 +0000 UTC m=+50.293770664" lastFinishedPulling="2025-09-12 17:40:23.216290232 +0000 UTC m=+57.172517173" observedRunningTime="2025-09-12 17:40:24.052994032 +0000 UTC m=+58.009220964" watchObservedRunningTime="2025-09-12 17:40:25.747732447 +0000 UTC m=+59.703959383"
Sep 12 17:40:25.903672 systemd[1]: Created slice kubepods-besteffort-podacbb60d9_8ffc_42f1_8164_11ab3c9ba7e6.slice - libcontainer container kubepods-besteffort-podacbb60d9_8ffc_42f1_8164_11ab3c9ba7e6.slice.
Sep 12 17:40:26.050846 kubelet[2512]: I0912 17:40:26.050777 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsvd2\" (UniqueName: \"kubernetes.io/projected/acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6-kube-api-access-vsvd2\") pod \"calico-apiserver-74f4d49d55-qz29v\" (UID: \"acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6\") " pod="calico-apiserver/calico-apiserver-74f4d49d55-qz29v"
Sep 12 17:40:26.051610 kubelet[2512]: I0912 17:40:26.050942 2512 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6-calico-apiserver-certs\") pod \"calico-apiserver-74f4d49d55-qz29v\" (UID: \"acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6\") " pod="calico-apiserver/calico-apiserver-74f4d49d55-qz29v"
Sep 12 17:40:26.217797 systemd[1]: Started sshd@8-144.126.222.162:22-147.75.109.163:45534.service - OpenSSH per-connection server daemon (147.75.109.163:45534).
Sep 12 17:40:26.302101 containerd[1471]: time="2025-09-12T17:40:26.301585731Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-qz29v,Uid:acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6,Namespace:calico-apiserver,Attempt:0,}"
Sep 12 17:40:26.520117 sshd[5191]: Accepted publickey for core from 147.75.109.163 port 45534 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:40:26.529378 sshd[5191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:40:26.550937 systemd-logind[1445]: New session 9 of user core.
Sep 12 17:40:26.560424 systemd[1]: Started session-9.scope - Session 9 of User core.
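The two pod_startup_latency_tracker entries above carry enough data to check themselves. Assuming (this is a reading of kubelet internals, not something the log states) that podStartSLOduration is the end-to-end startup duration minus the image-pull window, with the pull window taken from the monotonic m=+ offsets, the logged numbers reproduce exactly:

// Back-of-the-envelope check of the first tracker line; the second one
// (31.868985938) checks out the same way. Values are copied from the log.
package main

import "fmt"

func main() {
	e2e := 40.052576059                 // podStartE2EDuration, seconds
	firstStartedPulling := 49.864640468 // m=+ offset at first pull start
	lastFinishedPulling := 56.737318579 // m=+ offset at last pull finish

	slo := e2e - (lastFinishedPulling - firstStartedPulling)
	fmt.Printf("podStartSLOduration=%.9f\n", slo) // prints 33.179897948
}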
Sep 12 17:40:26.611533 containerd[1471]: time="2025-09-12T17:40:26.611433983Z" level=info msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" Sep 12 17:40:27.224474 systemd-networkd[1365]: cali197597c15e5: Link UP Sep 12 17:40:27.227495 systemd-networkd[1365]: cali197597c15e5: Gained carrier Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:26.822 [WARNING][5218] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eb8fd32a-7751-44df-ab7f-52654a5a58c4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749", Pod:"goldmane-54d579b49d-srrrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd7628fc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:26.835 [INFO][5218] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:26.835 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" iface="eth0" netns="" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:26.835 [INFO][5218] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:26.835 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.128 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.133 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.199 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.247 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.248 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.260 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:27.312566 containerd[1471]: 2025-09-12 17:40:27.294 [INFO][5218] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.312566 containerd[1471]: time="2025-09-12T17:40:27.312289618Z" level=info msg="TearDown network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" successfully" Sep 12 17:40:27.312566 containerd[1471]: time="2025-09-12T17:40:27.312321425Z" level=info msg="StopPodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" returns successfully" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.670 [INFO][5193] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0 calico-apiserver-74f4d49d55- calico-apiserver acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6 1188 0 2025-09-12 17:40:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:74f4d49d55 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-a-756b4d7dc2 calico-apiserver-74f4d49d55-qz29v eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali197597c15e5 [] [] }} ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.671 [INFO][5193] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.893 [INFO][5227] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" HandleID="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.321905 containerd[1471]: 
2025-09-12 17:40:26.894 [INFO][5227] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" HandleID="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fb70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-a-756b4d7dc2", "pod":"calico-apiserver-74f4d49d55-qz29v", "timestamp":"2025-09-12 17:40:26.893359969 +0000 UTC"}, Hostname:"ci-4081.3.6-a-756b4d7dc2", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.895 [INFO][5227] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.896 [INFO][5227] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.896 [INFO][5227] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-a-756b4d7dc2' Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:26.945 [INFO][5227] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.014 [INFO][5227] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.050 [INFO][5227] ipam/ipam.go 511: Trying affinity for 192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.059 [INFO][5227] ipam/ipam.go 158: Attempting to load block cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.074 [INFO][5227] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.64.128/26 host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.075 [INFO][5227] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.64.128/26 handle="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.087 [INFO][5227] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.134 [INFO][5227] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.64.128/26 handle="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.198 [INFO][5227] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.64.138/26] block=192.168.64.128/26 handle="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.198 [INFO][5227] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.64.138/26] handle="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" 
host="ci-4081.3.6-a-756b4d7dc2" Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.198 [INFO][5227] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:27.321905 containerd[1471]: 2025-09-12 17:40:27.198 [INFO][5227] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.64.138/26] IPv6=[] ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" HandleID="k8s-pod-network.6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.215 [INFO][5193] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"", Pod:"calico-apiserver-74f4d49d55-qz29v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali197597c15e5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.215 [INFO][5193] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.64.138/32] ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.215 [INFO][5193] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali197597c15e5 ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.239 [INFO][5193] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" 
Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.248 [INFO][5193] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6", ResourceVersion:"1188", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 40, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de", Pod:"calico-apiserver-74f4d49d55-qz29v", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali197597c15e5", MAC:"0e:50:5b:2d:f5:94", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.326868 containerd[1471]: 2025-09-12 17:40:27.287 [INFO][5193] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de" Namespace="calico-apiserver" Pod="calico-apiserver-74f4d49d55-qz29v" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--qz29v-eth0" Sep 12 17:40:27.443258 containerd[1471]: time="2025-09-12T17:40:27.443195542Z" level=info msg="RemovePodSandbox for \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" Sep 12 17:40:27.443462 containerd[1471]: time="2025-09-12T17:40:27.443268686Z" level=info msg="Forcibly stopping sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\"" Sep 12 17:40:27.454934 containerd[1471]: time="2025-09-12T17:40:27.454298151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:40:27.454934 containerd[1471]: time="2025-09-12T17:40:27.454391100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:40:27.454934 containerd[1471]: time="2025-09-12T17:40:27.454408104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:27.454934 containerd[1471]: time="2025-09-12T17:40:27.454531660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:40:27.555329 sshd[5191]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:27.564189 systemd[1]: sshd@8-144.126.222.162:22-147.75.109.163:45534.service: Deactivated successfully. Sep 12 17:40:27.571004 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:40:27.580869 systemd-logind[1445]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:40:27.584263 systemd-logind[1445]: Removed session 9. Sep 12 17:40:27.600451 systemd[1]: Started cri-containerd-6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de.scope - libcontainer container 6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de. Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.624 [WARNING][5286] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"eb8fd32a-7751-44df-ab7f-52654a5a58c4", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749", Pod:"goldmane-54d579b49d-srrrn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.64.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1cd7628fc3e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.626 [INFO][5286] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.626 [INFO][5286] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" iface="eth0" netns="" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.626 [INFO][5286] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.626 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.712 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.712 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.712 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.731 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.731 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" HandleID="k8s-pod-network.b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-goldmane--54d579b49d--srrrn-eth0" Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.735 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:27.746636 containerd[1471]: 2025-09-12 17:40:27.738 [INFO][5286] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef" Sep 12 17:40:27.748207 containerd[1471]: time="2025-09-12T17:40:27.747309451Z" level=info msg="TearDown network for sandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" successfully" Sep 12 17:40:27.793745 containerd[1471]: time="2025-09-12T17:40:27.793689852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-74f4d49d55-qz29v,Uid:acbb60d9-8ffc-42f1-8164-11ab3c9ba7e6,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de\"" Sep 12 17:40:27.803796 containerd[1471]: time="2025-09-12T17:40:27.802846512Z" level=info msg="CreateContainer within sandbox \"6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:27.837268 containerd[1471]: time="2025-09-12T17:40:27.836882077Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:40:27.839299 containerd[1471]: time="2025-09-12T17:40:27.839231474Z" level=info msg="CreateContainer within sandbox \"6b15346fd73909c1e88a6ece358c21525d258f92e9deaa5e4fc2c3a7701566de\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"917ff641b5ca68a95aeb01d05748fff60ee4b3047d5a77e278b768d49eff7ace\"" Sep 12 17:40:27.841767 containerd[1471]: time="2025-09-12T17:40:27.841164605Z" level=info msg="StartContainer for \"917ff641b5ca68a95aeb01d05748fff60ee4b3047d5a77e278b768d49eff7ace\"" Sep 12 17:40:27.917525 systemd[1]: Started cri-containerd-917ff641b5ca68a95aeb01d05748fff60ee4b3047d5a77e278b768d49eff7ace.scope - libcontainer container 917ff641b5ca68a95aeb01d05748fff60ee4b3047d5a77e278b768d49eff7ace. Sep 12 17:40:27.932894 containerd[1471]: time="2025-09-12T17:40:27.932834228Z" level=info msg="RemovePodSandbox \"b2fcc2b7799229abc23f136ceff75aceb586eff956381c098edd45905a782eef\" returns successfully" Sep 12 17:40:27.954725 containerd[1471]: time="2025-09-12T17:40:27.954629953Z" level=info msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\"" Sep 12 17:40:28.171982 containerd[1471]: time="2025-09-12T17:40:28.171369712Z" level=info msg="StartContainer for \"917ff641b5ca68a95aeb01d05748fff60ee4b3047d5a77e278b768d49eff7ace\" returns successfully" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.045 [WARNING][5365] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0", GenerateName:"calico-kube-controllers-dc9d95797-", Namespace:"calico-system", SelfLink:"", UID:"8e802a89-f9ff-49ae-94df-fd49494b3d9b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc9d95797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5", Pod:"calico-kube-controllers-dc9d95797-xw78z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2e7d72fd6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.046 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.046 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace 
called with no netns name, ignoring. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" iface="eth0" netns="" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.046 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.046 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.141 [INFO][5372] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.141 [INFO][5372] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.142 [INFO][5372] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.162 [WARNING][5372] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.163 [INFO][5372] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.166 [INFO][5372] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:28.180669 containerd[1471]: 2025-09-12 17:40:28.170 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.181977 containerd[1471]: time="2025-09-12T17:40:28.181588194Z" level=info msg="TearDown network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" successfully" Sep 12 17:40:28.181977 containerd[1471]: time="2025-09-12T17:40:28.181657076Z" level=info msg="StopPodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" returns successfully" Sep 12 17:40:28.182598 containerd[1471]: time="2025-09-12T17:40:28.182545501Z" level=info msg="RemovePodSandbox for \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\"" Sep 12 17:40:28.183062 containerd[1471]: time="2025-09-12T17:40:28.182607731Z" level=info msg="Forcibly stopping sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\"" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.263 [WARNING][5395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0", GenerateName:"calico-kube-controllers-dc9d95797-", Namespace:"calico-system", SelfLink:"", UID:"8e802a89-f9ff-49ae-94df-fd49494b3d9b", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"dc9d95797", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5", Pod:"calico-kube-controllers-dc9d95797-xw78z", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.64.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib2e7d72fd6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.263 [INFO][5395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.264 [INFO][5395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" iface="eth0" netns="" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.264 [INFO][5395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.264 [INFO][5395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.328 [INFO][5403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.328 [INFO][5403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.328 [INFO][5403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.339 [WARNING][5403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.339 [INFO][5403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" HandleID="k8s-pod-network.17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--kube--controllers--dc9d95797--xw78z-eth0" Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.342 [INFO][5403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:28.348478 containerd[1471]: 2025-09-12 17:40:28.345 [INFO][5395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263" Sep 12 17:40:28.349788 containerd[1471]: time="2025-09-12T17:40:28.348534703Z" level=info msg="TearDown network for sandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" successfully" Sep 12 17:40:28.352938 containerd[1471]: time="2025-09-12T17:40:28.352886779Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:28.353122 containerd[1471]: time="2025-09-12T17:40:28.353042606Z" level=info msg="RemovePodSandbox \"17b3f3eea27d903864ced49375891261896f2f767876180c47f4ac2e98df6263\" returns successfully" Sep 12 17:40:28.354546 containerd[1471]: time="2025-09-12T17:40:28.354490769Z" level=info msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\"" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.429 [WARNING][5421] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b35527ec-e920-470f-8bab-682f16b3b48b", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce", Pod:"coredns-668d6bf9bc-g49wb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicea41103d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.429 [INFO][5421] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.429 [INFO][5421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" iface="eth0" netns="" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.429 [INFO][5421] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.429 [INFO][5421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.468 [INFO][5428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.468 [INFO][5428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.468 [INFO][5428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.483 [WARNING][5428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.483 [INFO][5428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.488 [INFO][5428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:28.492873 containerd[1471]: 2025-09-12 17:40:28.490 [INFO][5421] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.494656 containerd[1471]: time="2025-09-12T17:40:28.493375316Z" level=info msg="TearDown network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" successfully" Sep 12 17:40:28.494656 containerd[1471]: time="2025-09-12T17:40:28.493412423Z" level=info msg="StopPodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" returns successfully" Sep 12 17:40:28.496384 containerd[1471]: time="2025-09-12T17:40:28.496006885Z" level=info msg="RemovePodSandbox for \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\"" Sep 12 17:40:28.496384 containerd[1471]: time="2025-09-12T17:40:28.496051507Z" level=info msg="Forcibly stopping sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\"" Sep 12 17:40:28.595353 systemd-networkd[1365]: cali197597c15e5: Gained IPv6LL Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.563 [WARNING][5443] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b35527ec-e920-470f-8bab-682f16b3b48b", ResourceVersion:"1051", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"4783ed4728ce0980672302bf85c9fb4b82f3cedfdf6143e35a0417200795c7ce", Pod:"coredns-668d6bf9bc-g49wb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicea41103d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.564 [INFO][5443] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.564 [INFO][5443] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" iface="eth0" netns="" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.564 [INFO][5443] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.564 [INFO][5443] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.615 [INFO][5450] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.616 [INFO][5450] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.616 [INFO][5450] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.626 [WARNING][5450] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.626 [INFO][5450] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" HandleID="k8s-pod-network.c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--g49wb-eth0" Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.630 [INFO][5450] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:28.641535 containerd[1471]: 2025-09-12 17:40:28.633 [INFO][5443] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0" Sep 12 17:40:28.641535 containerd[1471]: time="2025-09-12T17:40:28.640533178Z" level=info msg="TearDown network for sandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" successfully" Sep 12 17:40:28.654676 containerd[1471]: time="2025-09-12T17:40:28.654382705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:28.654676 containerd[1471]: time="2025-09-12T17:40:28.654521727Z" level=info msg="RemovePodSandbox \"c5b67f990fab7e1e7c9d00a5b5e21b1c8386bc740a84fb2f290cac09b154a5b0\" returns successfully" Sep 12 17:40:28.656246 containerd[1471]: time="2025-09-12T17:40:28.655438134Z" level=info msg="StopPodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.725 [WARNING][5464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3", Pod:"calico-apiserver-854494659d-799gf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf06bf79f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.725 [INFO][5464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.725 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" iface="eth0" netns="" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.725 [INFO][5464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.725 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.767 [INFO][5471] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.767 [INFO][5471] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.767 [INFO][5471] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.780 [WARNING][5471] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.780 [INFO][5471] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.784 [INFO][5471] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:28.790449 containerd[1471]: 2025-09-12 17:40:28.787 [INFO][5464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:28.790449 containerd[1471]: time="2025-09-12T17:40:28.790306585Z" level=info msg="TearDown network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" successfully" Sep 12 17:40:28.790449 containerd[1471]: time="2025-09-12T17:40:28.790334709Z" level=info msg="StopPodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" returns successfully" Sep 12 17:40:28.801726 containerd[1471]: time="2025-09-12T17:40:28.801335753Z" level=info msg="RemovePodSandbox for \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" Sep 12 17:40:28.801726 containerd[1471]: time="2025-09-12T17:40:28.801391342Z" level=info msg="Forcibly stopping sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\"" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.889 [WARNING][5485] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
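The ipam_plugin lines 412, 429, and 440 in these records trace a two-step release: try the allocation keyed by handle ID first, and when nothing is recorded there ("Asked to release address but it doesn't exist. Ignoring"), fall back to releasing by workload ID instead of failing the teardown. A rough sketch of that fallback, with a plain map standing in for the real IPAM datastore:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// allocations stands in for Calico's IPAM datastore, keyed by handle.
var allocations = map[string]string{}

func releaseByHandle(handleID string) error {
	if _, ok := allocations[handleID]; !ok {
		return errNotFound
	}
	delete(allocations, handleID)
	return nil
}

// releaseAddress sketches the fallback seen in the log: release by
// handleID, and when nothing is recorded there, retry keyed by the
// workload ID rather than failing the whole teardown.
func releaseAddress(handleID, workloadID string) {
	if err := releaseByHandle(handleID); errors.Is(err, errNotFound) {
		fmt.Println("WARNING: asked to release address but it doesn't exist; ignoring")
		if err := releaseByHandle(workloadID); errors.Is(err, errNotFound) {
			fmt.Println("no allocation under workload ID either; teardown continues")
		}
	}
}

func main() {
	releaseAddress(
		"k8s-pod-network.c26a955dbdf4",
		"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0",
	)
}
```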
ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3", ResourceVersion:"1139", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3", Pod:"calico-apiserver-854494659d-799gf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2cf06bf79f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.889 [INFO][5485] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.889 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" iface="eth0" netns="" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.889 [INFO][5485] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.889 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.959 [INFO][5493] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.961 [INFO][5493] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.961 [INFO][5493] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.978 [WARNING][5493] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.978 [INFO][5493] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" HandleID="k8s-pod-network.c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.982 [INFO][5493] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.006264 containerd[1471]: 2025-09-12 17:40:28.997 [INFO][5485] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76" Sep 12 17:40:29.007856 containerd[1471]: time="2025-09-12T17:40:29.007129791Z" level=info msg="TearDown network for sandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" successfully" Sep 12 17:40:29.016013 containerd[1471]: time="2025-09-12T17:40:29.015582643Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:29.016013 containerd[1471]: time="2025-09-12T17:40:29.015835936Z" level=info msg="RemovePodSandbox \"c26a955dbdf4b0a1a04c57364de76949e873e2c9e00d0e6b95b3c88395b1af76\" returns successfully" Sep 12 17:40:29.017968 containerd[1471]: time="2025-09-12T17:40:29.017572878Z" level=info msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\"" Sep 12 17:40:29.191134 kubelet[2512]: I0912 17:40:29.190165 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-74f4d49d55-qz29v" podStartSLOduration=4.189468605 podStartE2EDuration="4.189468605s" podCreationTimestamp="2025-09-12 17:40:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:40:29.187944628 +0000 UTC m=+63.144171570" watchObservedRunningTime="2025-09-12 17:40:29.189468605 +0000 UTC m=+63.145695527" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.146 [WARNING][5511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
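The kubelet pod_startup_latency_tracker record above reports podStartSLOduration=4.189468605s for calico-apiserver-74f4d49d55-qz29v: the pod was created at 17:40:25 and observed running at 17:40:29.189468605, and because firstStartedPulling/lastFinishedPulling are the zero time there is no image-pull time to deduct, which is why podStartE2EDuration equals the SLO duration. The arithmetic, reproduced:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// podCreationTimestamp and observedRunningTime from the kubelet
	// record above.
	created, _ := time.Parse(time.RFC3339, "2025-09-12T17:40:25Z")
	running, _ := time.Parse(time.RFC3339Nano, "2025-09-12T17:40:29.189468605Z")

	// With zero pull timestamps nothing is subtracted, so the SLO
	// duration is simply observedRunningTime - podCreationTimestamp.
	var pulled time.Duration // firstStartedPulling == lastFinishedPulling == zero time
	slo := running.Sub(created) - pulled

	fmt.Println(slo) // 4.189468605s, matching podStartSLOduration
}
```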
ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06c23272-db86-4e7e-9c53-92578f077ab6", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493", Pod:"csi-node-driver-4k29j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ce226be312", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.162 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.162 [INFO][5511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" iface="eth0" netns="" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.162 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.162 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.253 [INFO][5518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.253 [INFO][5518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.253 [INFO][5518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.266 [WARNING][5518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.266 [INFO][5518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.272 [INFO][5518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.283454 containerd[1471]: 2025-09-12 17:40:29.275 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.283454 containerd[1471]: time="2025-09-12T17:40:29.280405099Z" level=info msg="TearDown network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" successfully" Sep 12 17:40:29.283454 containerd[1471]: time="2025-09-12T17:40:29.280430934Z" level=info msg="StopPodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" returns successfully" Sep 12 17:40:29.297778 containerd[1471]: time="2025-09-12T17:40:29.297541259Z" level=info msg="RemovePodSandbox for \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\"" Sep 12 17:40:29.297778 containerd[1471]: time="2025-09-12T17:40:29.297778589Z" level=info msg="Forcibly stopping sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\"" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.431 [WARNING][5535] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
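Every release in this section brackets its datastore work with "About to acquire host-wide IPAM lock." / "Acquired" / "Released": one per-node lock serializes all IPAM mutations so the back-to-back teardowns (note the interleaved plugin instances 5471, 5493, 5518) cannot race on allocation blocks. Calico implements this as an on-disk lock; a plain mutex gives the shape:

```go
package main

import (
	"fmt"
	"sync"
)

// hostWideIPAMLock serializes every IPAM mutation on this node, the
// role played by Calico's host-wide lock in the records above.
var hostWideIPAMLock sync.Mutex

func releaseWithLock(id string, wg *sync.WaitGroup) {
	defer wg.Done()
	fmt.Println("About to acquire host-wide IPAM lock.", id)
	hostWideIPAMLock.Lock()
	fmt.Println("Acquired host-wide IPAM lock.", id)

	// ... read and update allocation blocks for this workload ...

	hostWideIPAMLock.Unlock()
	fmt.Println("Released host-wide IPAM lock.", id)
}

func main() {
	var wg sync.WaitGroup
	// Two teardowns arriving together, like the back-to-back
	// StopPodSandbox calls in the log: the lock forces an order.
	for _, id := range []string{"6dcbd497", "0ddf8650"} {
		wg.Add(1)
		go releaseWithLock(id, &wg)
	}
	wg.Wait()
}
```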
ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06c23272-db86-4e7e-9c53-92578f077ab6", ResourceVersion:"1032", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493", Pod:"csi-node-driver-4k29j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.64.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7ce226be312", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.432 [INFO][5535] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.432 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" iface="eth0" netns="" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.432 [INFO][5535] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.432 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.525 [INFO][5542] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.525 [INFO][5542] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.525 [INFO][5542] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.538 [WARNING][5542] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.538 [INFO][5542] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" HandleID="k8s-pod-network.6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-csi--node--driver--4k29j-eth0" Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.542 [INFO][5542] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.555142 containerd[1471]: 2025-09-12 17:40:29.544 [INFO][5535] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be" Sep 12 17:40:29.555142 containerd[1471]: time="2025-09-12T17:40:29.554340008Z" level=info msg="TearDown network for sandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" successfully" Sep 12 17:40:29.560867 containerd[1471]: time="2025-09-12T17:40:29.560805143Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:29.561133 containerd[1471]: time="2025-09-12T17:40:29.560899777Z" level=info msg="RemovePodSandbox \"6dcbd497d7eb3d4e914320a8da9558d682906ea67bd7b59a4e721e0cc190a6be\" returns successfully" Sep 12 17:40:29.562131 containerd[1471]: time="2025-09-12T17:40:29.561743913Z" level=info msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.676 [WARNING][5557] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
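Each sandbox in this section is torn down twice: a StopPodSandbox pass, then a "RemovePodSandbox ... Forcibly stopping sandbox" pass that repeats the stop before discarding the sandbox record. The repetition only works because every step tolerates already-deleted state, which is exactly what the ignored IPAM releases and skipped WEP deletions above show. A rough rendering of that idempotent pattern (illustrative, not containerd's code):

```go
package main

import "fmt"

// stopSandbox tears down networking; a second call on an already
// stopped sandbox still succeeds because each step (CNI DEL, IPAM
// release) tolerates absence, as the log's ignored warnings show.
func stopSandbox(id string, stopped map[string]bool) {
	if stopped[id] {
		fmt.Printf("sandbox %s already stopped; teardown steps all no-op\n", id)
	}
	// ... CNI DEL, ignoring "not found" style errors ...
	stopped[id] = true
	fmt.Printf("StopPodSandbox %q returns successfully\n", id)
}

// removeSandbox mirrors "Forcibly stopping sandbox ...": stop again
// unconditionally, then release the sandbox record itself.
func removeSandbox(id string, stopped map[string]bool) {
	fmt.Printf("Forcibly stopping sandbox %q\n", id)
	stopSandbox(id, stopped) // safe because stop is idempotent
	fmt.Printf("RemovePodSandbox %q returns successfully\n", id)
}

func main() {
	stopped := map[string]bool{}
	stopSandbox("0ddf865068d6", stopped)
	removeSandbox("0ddf865068d6", stopped)
}
```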
ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcab9732-3c6c-4b48-8986-45fc7d97bb57", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f", Pod:"calico-apiserver-854494659d-kvgvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe48816aabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.679 [INFO][5557] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.679 [INFO][5557] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" iface="eth0" netns="" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.679 [INFO][5557] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.680 [INFO][5557] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.806 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.808 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.808 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.841 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.841 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.847 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:29.856227 containerd[1471]: 2025-09-12 17:40:29.850 [INFO][5557] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:29.856227 containerd[1471]: time="2025-09-12T17:40:29.853777307Z" level=info msg="TearDown network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" successfully" Sep 12 17:40:29.856227 containerd[1471]: time="2025-09-12T17:40:29.853810613Z" level=info msg="StopPodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" returns successfully" Sep 12 17:40:29.857831 containerd[1471]: time="2025-09-12T17:40:29.857783906Z" level=info msg="RemovePodSandbox for \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" Sep 12 17:40:29.857831 containerd[1471]: time="2025-09-12T17:40:29.857834486Z" level=info msg="Forcibly stopping sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\"" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:29.957 [WARNING][5579] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0", GenerateName:"calico-apiserver-854494659d-", Namespace:"calico-apiserver", SelfLink:"", UID:"fcab9732-3c6c-4b48-8986-45fc7d97bb57", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"854494659d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f", Pod:"calico-apiserver-854494659d-kvgvx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calibe48816aabe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:29.957 [INFO][5579] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:29.957 [INFO][5579] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" iface="eth0" netns="" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:29.957 [INFO][5579] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:29.957 [INFO][5579] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.067 [INFO][5586] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.068 [INFO][5586] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.068 [INFO][5586] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.082 [WARNING][5586] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.082 [INFO][5586] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" HandleID="k8s-pod-network.0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.087 [INFO][5586] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.103864 containerd[1471]: 2025-09-12 17:40:30.095 [INFO][5579] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7" Sep 12 17:40:30.103864 containerd[1471]: time="2025-09-12T17:40:30.103523178Z" level=info msg="TearDown network for sandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" successfully" Sep 12 17:40:30.130883 containerd[1471]: time="2025-09-12T17:40:30.129224101Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.130883 containerd[1471]: time="2025-09-12T17:40:30.129347550Z" level=info msg="RemovePodSandbox \"0ddf865068d6768b0fe5b96f8d6a3d694b6120c2e3c9acf46d8588dfa857b4c7\" returns successfully" Sep 12 17:40:30.132820 containerd[1471]: time="2025-09-12T17:40:30.132739150Z" level=info msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.335 [WARNING][5600] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
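The recurring warning "Failed to get podSandbox status ... not found. Sending the event with nil podSandboxStatus." (quoted verbatim above, including containerd's own "when try to find" grammar) shows the removal path emitting its lifecycle event even when the sandbox metadata is already gone, rather than aborting. A sketch of that tolerant lookup, with illustrative types:

```go
package main

import (
	"errors"
	"fmt"
)

var errSandboxNotFound = errors.New("not found")

type sandboxStatus struct{ State string }

type event struct {
	SandboxID string
	Status    *sandboxStatus // nil when the sandbox is already gone
}

func getStatus(id string, store map[string]*sandboxStatus) (*sandboxStatus, error) {
	st, ok := store[id]
	if !ok {
		return nil, errSandboxNotFound
	}
	return st, nil
}

func main() {
	store := map[string]*sandboxStatus{} // sandbox record already removed
	id := "0ddf865068d6"

	st, err := getStatus(id, store)
	if errors.Is(err, errSandboxNotFound) {
		// Emit the event anyway with a nil status, as the warning in
		// the log describes, instead of failing the RemovePodSandbox.
		fmt.Printf("warning: no status for %s; sending event with nil podSandboxStatus\n", id)
	}
	_ = event{SandboxID: id, Status: st}
}
```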
ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2abfa125-f454-4039-a6a8-433554adf69c", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e", Pod:"calico-apiserver-74f4d49d55-cj5zp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c90d5c9d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.337 [INFO][5600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.337 [INFO][5600] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" iface="eth0" netns="" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.337 [INFO][5600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.337 [INFO][5600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.425 [INFO][5608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.427 [INFO][5608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.427 [INFO][5608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.444 [WARNING][5608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.444 [INFO][5608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.450 [INFO][5608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.480641 containerd[1471]: 2025-09-12 17:40:30.466 [INFO][5600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.480641 containerd[1471]: time="2025-09-12T17:40:30.480015634Z" level=info msg="TearDown network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" successfully" Sep 12 17:40:30.481558 containerd[1471]: time="2025-09-12T17:40:30.480054594Z" level=info msg="StopPodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" returns successfully" Sep 12 17:40:30.484015 containerd[1471]: time="2025-09-12T17:40:30.483966595Z" level=info msg="RemovePodSandbox for \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" Sep 12 17:40:30.484158 containerd[1471]: time="2025-09-12T17:40:30.484032540Z" level=info msg="Forcibly stopping sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\"" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.640 [WARNING][5622] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0", GenerateName:"calico-apiserver-74f4d49d55-", Namespace:"calico-apiserver", SelfLink:"", UID:"2abfa125-f454-4039-a6a8-433554adf69c", ResourceVersion:"1166", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"74f4d49d55", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"ef8df4d1e8df2d9b6dba522f30b1c01cbcd77ae3e0bf3bccc2dd1d2cbf5da78e", Pod:"calico-apiserver-74f4d49d55-cj5zp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.64.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali41c90d5c9d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.642 [INFO][5622] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.642 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" iface="eth0" netns="" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.642 [INFO][5622] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.642 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.747 [INFO][5629] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.749 [INFO][5629] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.750 [INFO][5629] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.761 [WARNING][5629] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.762 [INFO][5629] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" HandleID="k8s-pod-network.460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--74f4d49d55--cj5zp-eth0" Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.764 [INFO][5629] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:30.783223 containerd[1471]: 2025-09-12 17:40:30.771 [INFO][5622] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37" Sep 12 17:40:30.785814 containerd[1471]: time="2025-09-12T17:40:30.783819693Z" level=info msg="TearDown network for sandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" successfully" Sep 12 17:40:30.791500 containerd[1471]: time="2025-09-12T17:40:30.791197242Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:30.791500 containerd[1471]: time="2025-09-12T17:40:30.791314820Z" level=info msg="RemovePodSandbox \"460938db4ce20050c490c200f60ffff50c1d459653c214349896dd7f927b1c37\" returns successfully" Sep 12 17:40:30.794479 containerd[1471]: time="2025-09-12T17:40:30.794115740Z" level=info msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" Sep 12 17:40:31.071052 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262189973.mount: Deactivated successfully. Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:30.935 [WARNING][5643] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:30.936 [INFO][5643] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:30.936 [INFO][5643] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
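dataplane_linux.go 555, which closes the record above, is the other idempotency valve: on a repeated DEL the kernel namespace no longer exists, the plugin is invoked with an empty netns path (the record continues below with netns=""), and namespace cleanup degrades to a no-op instead of an error. Roughly:

```go
package main

import "fmt"

// cleanUpNamespace sketches the check behind dataplane_linux.go 555:
// with no netns path there is nothing to detach, so a repeated DEL
// skips namespace cleanup rather than failing.
func cleanUpNamespace(containerID, netns string) error {
	if netns == "" {
		fmt.Printf("CleanUpNamespace called with no netns name, ignoring. ContainerID=%s\n", containerID)
		return nil
	}
	// ... remove the veth from the named namespace ...
	return nil
}

func main() {
	_ = cleanUpNamespace("460938db4ce2", "") // second DEL: netns already gone
}
```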
ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" iface="eth0" netns="" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:30.936 [INFO][5643] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:30.936 [INFO][5643] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.059 [INFO][5650] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.063 [INFO][5650] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.066 [INFO][5650] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.087 [WARNING][5650] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.087 [INFO][5650] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.091 [INFO][5650] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.096939 containerd[1471]: 2025-09-12 17:40:31.093 [INFO][5643] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.098866 containerd[1471]: time="2025-09-12T17:40:31.097222251Z" level=info msg="TearDown network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" successfully" Sep 12 17:40:31.098866 containerd[1471]: time="2025-09-12T17:40:31.097283567Z" level=info msg="StopPodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" returns successfully" Sep 12 17:40:31.099612 containerd[1471]: time="2025-09-12T17:40:31.099399860Z" level=info msg="RemovePodSandbox for \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" Sep 12 17:40:31.099612 containerd[1471]: time="2025-09-12T17:40:31.099533119Z" level=info msg="Forcibly stopping sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\"" Sep 12 17:40:31.113710 containerd[1471]: time="2025-09-12T17:40:31.113660643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:31.137104 containerd[1471]: time="2025-09-12T17:40:31.127282034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 17:40:31.165139 containerd[1471]: time="2025-09-12T17:40:31.163939978Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:31.167822 containerd[1471]: time="2025-09-12T17:40:31.167766776Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:31.171595 containerd[1471]: time="2025-09-12T17:40:31.171526649Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 5.963802024s" Sep 12 17:40:31.171780 containerd[1471]: time="2025-09-12T17:40:31.171763921Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 17:40:31.178781 containerd[1471]: time="2025-09-12T17:40:31.178729574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:40:31.181627 containerd[1471]: time="2025-09-12T17:40:31.181554815Z" level=info msg="CreateContainer within sandbox \"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 17:40:31.205685 containerd[1471]: time="2025-09-12T17:40:31.205619835Z" level=info msg="CreateContainer within sandbox \"00ff837ebfdbeda23af79570e8fc87e934efebb71d795e020a2fdd69e4f7c9a8\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909\"" Sep 12 17:40:31.207334 containerd[1471]: time="2025-09-12T17:40:31.207300584Z" level=info msg="StartContainer for \"53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909\"" Sep 12 17:40:31.382969 
systemd[1]: run-containerd-runc-k8s.io-53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909-runc.nfbm5A.mount: Deactivated successfully. Sep 12 17:40:31.403797 systemd[1]: Started cri-containerd-53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909.scope - libcontainer container 53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909. Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.295 [WARNING][5668] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" WorkloadEndpoint="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.296 [INFO][5668] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.296 [INFO][5668] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" iface="eth0" netns="" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.296 [INFO][5668] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.296 [INFO][5668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.510 [INFO][5688] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.516 [INFO][5688] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.516 [INFO][5688] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.533 [WARNING][5688] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.534 [INFO][5688] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" HandleID="k8s-pod-network.ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-whisker--6994b65855--tsfv5-eth0" Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.537 [INFO][5688] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.551149 containerd[1471]: 2025-09-12 17:40:31.544 [INFO][5668] cni-plugin/k8s.go 653: Teardown processing complete. 
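Interleaved with the teardown, the whisker-backend pull above completed: 33,085,375 bytes in 5.963802024s per the PullImage record (the separate "bytes read=33085545" counter presumably also covers manifest and config fetches). That works out to roughly 5.3 MiB/s:

```go
package main

import "fmt"

func main() {
	const (
		sizeBytes = 33085375    // "size" from the PullImage record above
		seconds   = 5.963802024 // reported pull duration
	)
	mibPerSec := float64(sizeBytes) / seconds / (1 << 20)
	fmt.Printf("%.2f MiB/s\n", mibPerSec) // ≈ 5.29 MiB/s
}
```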
ContainerID="ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129" Sep 12 17:40:31.551149 containerd[1471]: time="2025-09-12T17:40:31.550584613Z" level=info msg="TearDown network for sandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" successfully" Sep 12 17:40:31.569789 containerd[1471]: time="2025-09-12T17:40:31.569723176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:31.570264 containerd[1471]: time="2025-09-12T17:40:31.570101880Z" level=info msg="RemovePodSandbox \"ae981957708cb5e6d65baed1b963dcf345ac3af7276e0c5dcf43793b1618c129\" returns successfully" Sep 12 17:40:31.572142 containerd[1471]: time="2025-09-12T17:40:31.571709814Z" level=info msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.710 [WARNING][5714] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5925fe00-d8c6-4534-a505-f32b5402931d", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576", Pod:"coredns-668d6bf9bc-t55ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40e8a897c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.711 [INFO][5714] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.711 [INFO][5714] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" iface="eth0" netns="" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.711 [INFO][5714] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.711 [INFO][5714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.749 [INFO][5723] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.749 [INFO][5723] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.750 [INFO][5723] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.760 [WARNING][5723] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.760 [INFO][5723] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.764 [INFO][5723] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.771771 containerd[1471]: 2025-09-12 17:40:31.766 [INFO][5714] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.771771 containerd[1471]: time="2025-09-12T17:40:31.771642021Z" level=info msg="TearDown network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" successfully" Sep 12 17:40:31.771771 containerd[1471]: time="2025-09-12T17:40:31.771675441Z" level=info msg="StopPodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" returns successfully" Sep 12 17:40:31.773154 containerd[1471]: time="2025-09-12T17:40:31.772923465Z" level=info msg="RemovePodSandbox for \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" Sep 12 17:40:31.773154 containerd[1471]: time="2025-09-12T17:40:31.772965044Z" level=info msg="Forcibly stopping sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\"" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.841 [WARNING][5739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"5925fe00-d8c6-4534-a505-f32b5402931d", ResourceVersion:"1109", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 39, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-a-756b4d7dc2", ContainerID:"437ac3904c6d53d7b8c05f50ca9576b74c7647afc7821420d078cf0d67308576", Pod:"coredns-668d6bf9bc-t55ll", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.64.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid40e8a897c6", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.842 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.842 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" iface="eth0" netns="" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.842 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.842 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.897 [INFO][5747] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.897 [INFO][5747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.897 [INFO][5747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.924 [WARNING][5747] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.924 [INFO][5747] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" HandleID="k8s-pod-network.22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-coredns--668d6bf9bc--t55ll-eth0" Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.931 [INFO][5747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:31.939315 containerd[1471]: 2025-09-12 17:40:31.936 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e" Sep 12 17:40:31.940750 containerd[1471]: time="2025-09-12T17:40:31.940314450Z" level=info msg="TearDown network for sandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" successfully" Sep 12 17:40:31.948512 containerd[1471]: time="2025-09-12T17:40:31.948365898Z" level=info msg="StartContainer for \"53a8b870a94989584539a74dcd11fc1ddf0fc73788d802f58f07821dcfeaa909\" returns successfully" Sep 12 17:40:31.958548 containerd[1471]: time="2025-09-12T17:40:31.958321731Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 17:40:31.958548 containerd[1471]: time="2025-09-12T17:40:31.958420808Z" level=info msg="RemovePodSandbox \"22daa910d71992c6e0bee641206ef2a14479d6594c2cbf1f97b8593bebf6187e\" returns successfully" Sep 12 17:40:32.339719 kubelet[2512]: I0912 17:40:32.329524 2512 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:40:32.496811 containerd[1471]: time="2025-09-12T17:40:32.496666057Z" level=info msg="StopContainer for \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\" with timeout 30 (s)" Sep 12 17:40:32.503303 containerd[1471]: time="2025-09-12T17:40:32.502569368Z" level=info msg="Stop container \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\" with signal terminated" Sep 12 17:40:32.581179 systemd[1]: Started sshd@9-144.126.222.162:22-147.75.109.163:38058.service - OpenSSH per-connection server daemon (147.75.109.163:38058). Sep 12 17:40:32.596254 systemd[1]: cri-containerd-3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200.scope: Deactivated successfully. Sep 12 17:40:32.596677 systemd[1]: cri-containerd-3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200.scope: Consumed 1.451s CPU time. Sep 12 17:40:32.736027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200-rootfs.mount: Deactivated successfully. 
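
Note: the Calico WorkloadEndpoint dumps above print the coredns ports as Go hex literals (Port:0x35, Port:0x23c1). A minimal Python sketch, illustrative only and not part of the log, converts them back to decimal; the tuples are copied from the dump itself:

    # Values copied from the WorkloadEndpoint Ports field in the log above.
    ports = [("dns", "UDP", 0x35), ("dns-tcp", "TCP", 0x35), ("metrics", "TCP", 0x23c1)]
    for name, proto, port in ports:
        print(f"{name:8s} {proto:3s} {port}")
    # dns      UDP  53
    # dns-tcp  TCP  53
    # metrics  TCP  9153

53 is the standard DNS port, and 9153 is CoreDNS's usual Prometheus metrics port, so the hex values line up with what a coredns endpoint is expected to expose.
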
Sep 12 17:40:32.768508 containerd[1471]: time="2025-09-12T17:40:32.741870872Z" level=info msg="shim disconnected" id=3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200 namespace=k8s.io Sep 12 17:40:32.779900 containerd[1471]: time="2025-09-12T17:40:32.779762590Z" level=warning msg="cleaning up after shim disconnected" id=3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200 namespace=k8s.io Sep 12 17:40:32.780469 containerd[1471]: time="2025-09-12T17:40:32.779862863Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:32.838386 sshd[5777]: Accepted publickey for core from 147.75.109.163 port 38058 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:32.842807 sshd[5777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:32.855649 systemd-logind[1445]: New session 10 of user core. Sep 12 17:40:32.866408 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:40:33.028117 containerd[1471]: time="2025-09-12T17:40:33.028042478Z" level=info msg="StopContainer for \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\" returns successfully" Sep 12 17:40:33.049126 containerd[1471]: time="2025-09-12T17:40:33.047460744Z" level=info msg="StopPodSandbox for \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\"" Sep 12 17:40:33.051175 containerd[1471]: time="2025-09-12T17:40:33.049921740Z" level=info msg="Container to stop \"3c2c55562187c6c761f040e0254b4df9e00047bddaa7edce0ede79a4da232200\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:40:33.054491 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3-shm.mount: Deactivated successfully. Sep 12 17:40:33.076109 systemd[1]: cri-containerd-490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3.scope: Deactivated successfully. Sep 12 17:40:33.140647 containerd[1471]: time="2025-09-12T17:40:33.140407937Z" level=info msg="shim disconnected" id=490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3 namespace=k8s.io Sep 12 17:40:33.140647 containerd[1471]: time="2025-09-12T17:40:33.140487220Z" level=warning msg="cleaning up after shim disconnected" id=490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3 namespace=k8s.io Sep 12 17:40:33.140647 containerd[1471]: time="2025-09-12T17:40:33.140502636Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:33.152136 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3-rootfs.mount: Deactivated successfully. 
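
Note: the containerd entries in this stretch follow a key=value format (time=..., level=..., msg=..., plus optional id= and namespace= fields). A small Python sketch, offered only as an illustration of how such lines can be parsed, extracts those fields from one of the "shim disconnected" entries above:

    import re

    # Matches key="quoted value" or key=bareword pairs in a containerd log line.
    PAIR = re.compile(r'(\w+)=("((?:[^"\\]|\\.)*)"|\S+)')

    line = ('time="2025-09-12T17:40:33.140407937Z" level=info '
            'msg="shim disconnected" '
            'id=490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3 '
            'namespace=k8s.io')

    # Prefer the unquoted inner group when the value was quoted.
    fields = {m.group(1): (m.group(3) if m.group(3) is not None else m.group(2))
              for m in PAIR.finditer(line)}
    print(fields["level"], "|", fields["msg"], "|", fields["namespace"])
    # info | shim disconnected | k8s.io
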
Sep 12 17:40:33.417018 kubelet[2512]: I0912 17:40:33.414741 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d9c765474-k2rqc" podStartSLOduration=4.035357299 podStartE2EDuration="21.399273226s" podCreationTimestamp="2025-09-12 17:40:12 +0000 UTC" firstStartedPulling="2025-09-12 17:40:13.811892228 +0000 UTC m=+47.768119132" lastFinishedPulling="2025-09-12 17:40:31.175808139 +0000 UTC m=+65.132035059" observedRunningTime="2025-09-12 17:40:32.531663966 +0000 UTC m=+66.487890897" watchObservedRunningTime="2025-09-12 17:40:33.399273226 +0000 UTC m=+67.355500166" Sep 12 17:40:33.415864 systemd-networkd[1365]: cali2cf06bf79f5: Link DOWN Sep 12 17:40:33.415871 systemd-networkd[1365]: cali2cf06bf79f5: Lost carrier Sep 12 17:40:33.535920 kubelet[2512]: I0912 17:40:33.514872 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Sep 12 17:40:33.740525 sshd[5777]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:33.759123 systemd[1]: sshd@9-144.126.222.162:22-147.75.109.163:38058.service: Deactivated successfully. Sep 12 17:40:33.768768 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:40:33.771893 systemd-logind[1445]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:40:33.786714 systemd[1]: Started sshd@10-144.126.222.162:22-147.75.109.163:38064.service - OpenSSH per-connection server daemon (147.75.109.163:38064). Sep 12 17:40:33.790999 systemd-logind[1445]: Removed session 10. Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.402 [INFO][5863] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.402 [INFO][5863] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" iface="eth0" netns="/var/run/netns/cni-8de37339-4a73-169f-d735-8a8e3deac2fe" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.409 [INFO][5863] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" iface="eth0" netns="/var/run/netns/cni-8de37339-4a73-169f-d735-8a8e3deac2fe" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.457 [INFO][5863] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" after=54.275721ms iface="eth0" netns="/var/run/netns/cni-8de37339-4a73-169f-d735-8a8e3deac2fe" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.471 [INFO][5863] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.471 [INFO][5863] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.664 [INFO][5877] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.665 [INFO][5877] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.665 [INFO][5877] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.793 [INFO][5877] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.793 [INFO][5877] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" HandleID="k8s-pod-network.490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--799gf-eth0" Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.801 [INFO][5877] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:33.822120 containerd[1471]: 2025-09-12 17:40:33.815 [INFO][5863] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3" Sep 12 17:40:33.830924 containerd[1471]: time="2025-09-12T17:40:33.828218584Z" level=info msg="TearDown network for sandbox \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\" successfully" Sep 12 17:40:33.830924 containerd[1471]: time="2025-09-12T17:40:33.828281629Z" level=info msg="StopPodSandbox for \"490f1cf0fe2e660ddead1cc2a7a27603d97d2420c34a7bb9f4bae0b27dd0bed3\" returns successfully" Sep 12 17:40:33.832906 systemd[1]: run-netns-cni\x2d8de37339\x2d4a73\x2d169f\x2dd735\x2d8a8e3deac2fe.mount: Deactivated successfully. Sep 12 17:40:33.934046 sshd[5890]: Accepted publickey for core from 147.75.109.163 port 38064 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:33.938126 sshd[5890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:33.954427 systemd-logind[1445]: New session 11 of user core. Sep 12 17:40:33.960442 systemd[1]: Started session-11.scope - Session 11 of User core. 
Sep 12 17:40:34.169424 kubelet[2512]: I0912 17:40:34.169371 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nvg22\" (UniqueName: \"kubernetes.io/projected/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-kube-api-access-nvg22\") pod \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\" (UID: \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\") " Sep 12 17:40:34.169856 kubelet[2512]: I0912 17:40:34.169711 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-calico-apiserver-certs\") pod \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\" (UID: \"dbf3d71b-4b9a-4508-98f1-871dbdecc9e3\") " Sep 12 17:40:34.254034 systemd[1]: var-lib-kubelet-pods-dbf3d71b\x2d4b9a\x2d4508\x2d98f1\x2d871dbdecc9e3-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:40:34.270527 systemd[1]: var-lib-kubelet-pods-dbf3d71b\x2d4b9a\x2d4508\x2d98f1\x2d871dbdecc9e3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnvg22.mount: Deactivated successfully. Sep 12 17:40:34.382774 kubelet[2512]: I0912 17:40:34.376230 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-kube-api-access-nvg22" (OuterVolumeSpecName: "kube-api-access-nvg22") pod "dbf3d71b-4b9a-4508-98f1-871dbdecc9e3" (UID: "dbf3d71b-4b9a-4508-98f1-871dbdecc9e3"). InnerVolumeSpecName "kube-api-access-nvg22". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:40:34.386781 kubelet[2512]: I0912 17:40:34.375969 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "dbf3d71b-4b9a-4508-98f1-871dbdecc9e3" (UID: "dbf3d71b-4b9a-4508-98f1-871dbdecc9e3"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:40:34.394698 sshd[5890]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:34.421965 systemd[1]: sshd@10-144.126.222.162:22-147.75.109.163:38064.service: Deactivated successfully. Sep 12 17:40:34.433484 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:40:34.445222 systemd-logind[1445]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:40:34.455742 systemd[1]: Started sshd@11-144.126.222.162:22-147.75.109.163:38072.service - OpenSSH per-connection server daemon (147.75.109.163:38072). Sep 12 17:40:34.475194 systemd-logind[1445]: Removed session 11. Sep 12 17:40:34.489709 kubelet[2512]: I0912 17:40:34.488866 2512 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-calico-apiserver-certs\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:34.489709 kubelet[2512]: I0912 17:40:34.488966 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nvg22\" (UniqueName: \"kubernetes.io/projected/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3-kube-api-access-nvg22\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:34.603161 systemd[1]: Removed slice kubepods-besteffort-poddbf3d71b_4b9a_4508_98f1_871dbdecc9e3.slice - libcontainer container kubepods-besteffort-poddbf3d71b_4b9a_4508_98f1_871dbdecc9e3.slice. 
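
Note: the mount-unit names above use systemd's path escaping: '/' in a path becomes '-', while literal '-' and '~' are encoded as \x2d and \x7e. A short illustrative Python decoder (a sketch, not systemd-escape itself) recovers the kubelet volume path from one of the unit names in this log:

    def unescape_unit(name):
        """Reverse systemd path escaping for a .mount unit name (sketch)."""
        if name.endswith(".mount"):
            name = name[:-len(".mount")]
        out, i = [], 0
        while i < len(name):
            if name.startswith(r"\x", i):          # \x2d -> '-', \x7e -> '~'
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":                   # '-' separates path components
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "/" + "".join(out)

    unit = (r"var-lib-kubelet-pods-dbf3d71b\x2d4b9a\x2d4508\x2d98f1\x2d871dbdecc9e3"
            r"-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount")
    print(unescape_unit(unit))
    # /var/lib/kubelet/pods/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3/volumes/kubernetes.io~secret/calico-apiserver-certs
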
Sep 12 17:40:34.603361 systemd[1]: kubepods-besteffort-poddbf3d71b_4b9a_4508_98f1_871dbdecc9e3.slice: Consumed 1.499s CPU time. Sep 12 17:40:34.686608 sshd[5903]: Accepted publickey for core from 147.75.109.163 port 38072 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:34.691949 sshd[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:34.709601 systemd-logind[1445]: New session 12 of user core. Sep 12 17:40:34.714561 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:40:34.993285 sshd[5903]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:35.011949 systemd[1]: sshd@11-144.126.222.162:22-147.75.109.163:38072.service: Deactivated successfully. Sep 12 17:40:35.019482 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:40:35.022554 systemd-logind[1445]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:40:35.026552 systemd-logind[1445]: Removed session 12. Sep 12 17:40:36.256125 containerd[1471]: time="2025-09-12T17:40:36.254977338Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.260209 containerd[1471]: time="2025-09-12T17:40:36.258680294Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 17:40:36.262799 containerd[1471]: time="2025-09-12T17:40:36.261919486Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.271003 containerd[1471]: time="2025-09-12T17:40:36.270939937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.091548511s" Sep 12 17:40:36.271003 containerd[1471]: time="2025-09-12T17:40:36.270992070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 17:40:36.275876 containerd[1471]: time="2025-09-12T17:40:36.274854852Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:36.377627 kubelet[2512]: I0912 17:40:36.376760 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf3d71b-4b9a-4508-98f1-871dbdecc9e3" path="/var/lib/kubelet/pods/dbf3d71b-4b9a-4508-98f1-871dbdecc9e3/volumes" Sep 12 17:40:36.528241 containerd[1471]: time="2025-09-12T17:40:36.528187489Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 17:40:36.690122 containerd[1471]: time="2025-09-12T17:40:36.689585801Z" level=info msg="CreateContainer within sandbox \"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 17:40:36.715105 containerd[1471]: time="2025-09-12T17:40:36.713812691Z" level=info msg="CreateContainer within sandbox 
\"597737858d441eaccca0accd1576100e8799561107ca56bcf30833e0cc9517a5\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81\"" Sep 12 17:40:36.736206 containerd[1471]: time="2025-09-12T17:40:36.736153901Z" level=info msg="StartContainer for \"1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81\"" Sep 12 17:40:36.861309 systemd[1]: Started cri-containerd-1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81.scope - libcontainer container 1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81. Sep 12 17:40:37.140019 containerd[1471]: time="2025-09-12T17:40:37.137955113Z" level=info msg="StartContainer for \"1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81\" returns successfully" Sep 12 17:40:37.754190 kubelet[2512]: I0912 17:40:37.711601 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-dc9d95797-xw78z" podStartSLOduration=29.555596904 podStartE2EDuration="47.709641122s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="2025-09-12 17:40:18.289418796 +0000 UTC m=+52.245645729" lastFinishedPulling="2025-09-12 17:40:36.443462989 +0000 UTC m=+70.399689947" observedRunningTime="2025-09-12 17:40:37.708959553 +0000 UTC m=+71.665186488" watchObservedRunningTime="2025-09-12 17:40:37.709641122 +0000 UTC m=+71.665868064" Sep 12 17:40:40.027478 systemd[1]: Started sshd@12-144.126.222.162:22-147.75.109.163:40388.service - OpenSSH per-connection server daemon (147.75.109.163:40388). Sep 12 17:40:40.190529 sshd[5993]: Accepted publickey for core from 147.75.109.163 port 40388 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:40.192145 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:40.201977 systemd-logind[1445]: New session 13 of user core. Sep 12 17:40:40.214480 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:40:40.972281 sshd[5993]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:40.980984 systemd[1]: sshd@12-144.126.222.162:22-147.75.109.163:40388.service: Deactivated successfully. Sep 12 17:40:40.985822 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:40:40.992141 systemd-logind[1445]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:40:40.998995 systemd-logind[1445]: Removed session 13. Sep 12 17:40:41.475243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3362943703.mount: Deactivated successfully. 
Sep 12 17:40:42.383828 containerd[1471]: time="2025-09-12T17:40:42.383721612Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:42.447532 containerd[1471]: time="2025-09-12T17:40:42.425281986Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 17:40:42.450877 containerd[1471]: time="2025-09-12T17:40:42.450820046Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:42.456165 containerd[1471]: time="2025-09-12T17:40:42.455543692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:42.456834 containerd[1471]: time="2025-09-12T17:40:42.456791945Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 5.928553356s" Sep 12 17:40:42.461154 containerd[1471]: time="2025-09-12T17:40:42.461053850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 17:40:42.487475 containerd[1471]: time="2025-09-12T17:40:42.487417521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 17:40:42.645854 containerd[1471]: time="2025-09-12T17:40:42.645674710Z" level=info msg="CreateContainer within sandbox \"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 17:40:42.692806 containerd[1471]: time="2025-09-12T17:40:42.692624937Z" level=info msg="CreateContainer within sandbox \"6627127dd828342e5ad976e8b730faa735a486a416e57dd6f40b130995b43749\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d\"" Sep 12 17:40:42.702061 containerd[1471]: time="2025-09-12T17:40:42.698575769Z" level=info msg="StartContainer for \"dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d\"" Sep 12 17:40:42.931939 containerd[1471]: time="2025-09-12T17:40:42.931747778Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:42.933120 containerd[1471]: time="2025-09-12T17:40:42.933021502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 17:40:42.937571 containerd[1471]: time="2025-09-12T17:40:42.937504678Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 450.027645ms" Sep 12 17:40:42.937571 containerd[1471]: time="2025-09-12T17:40:42.937572088Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:40:42.940623 containerd[1471]: time="2025-09-12T17:40:42.939554792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 17:40:42.950695 containerd[1471]: time="2025-09-12T17:40:42.950092248Z" level=info msg="CreateContainer within sandbox \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:40:42.984817 containerd[1471]: time="2025-09-12T17:40:42.984709037Z" level=info msg="CreateContainer within sandbox \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\"" Sep 12 17:40:42.987711 containerd[1471]: time="2025-09-12T17:40:42.987509519Z" level=info msg="StartContainer for \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\"" Sep 12 17:40:42.997376 systemd[1]: Started cri-containerd-dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d.scope - libcontainer container dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d. Sep 12 17:40:43.109414 systemd[1]: Started cri-containerd-eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98.scope - libcontainer container eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98. Sep 12 17:40:43.168190 containerd[1471]: time="2025-09-12T17:40:43.168047837Z" level=info msg="StartContainer for \"dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d\" returns successfully" Sep 12 17:40:43.214759 containerd[1471]: time="2025-09-12T17:40:43.214556974Z" level=info msg="StartContainer for \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\" returns successfully" Sep 12 17:40:44.052009 containerd[1471]: time="2025-09-12T17:40:44.051016171Z" level=info msg="StopContainer for \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\" with timeout 30 (s)" Sep 12 17:40:44.055735 containerd[1471]: time="2025-09-12T17:40:44.055560181Z" level=info msg="Stop container \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\" with signal terminated" Sep 12 17:40:44.097830 kubelet[2512]: I0912 17:40:44.095952 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-854494659d-kvgvx" podStartSLOduration=38.872655001 podStartE2EDuration="1m0.077136194s" podCreationTimestamp="2025-09-12 17:39:44 +0000 UTC" firstStartedPulling="2025-09-12 17:40:21.734440223 +0000 UTC m=+55.690667126" lastFinishedPulling="2025-09-12 17:40:42.938921389 +0000 UTC m=+76.895148319" observedRunningTime="2025-09-12 17:40:44.07401527 +0000 UTC m=+78.030242205" watchObservedRunningTime="2025-09-12 17:40:44.077136194 +0000 UTC m=+78.033363131" Sep 12 17:40:44.120378 kubelet[2512]: I0912 17:40:44.119407 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-srrrn" podStartSLOduration=31.246276237 podStartE2EDuration="55.119372722s" podCreationTimestamp="2025-09-12 17:39:49 +0000 UTC" firstStartedPulling="2025-09-12 17:40:18.611798145 +0000 UTC m=+52.568025054" lastFinishedPulling="2025-09-12 17:40:42.484894597 +0000 UTC m=+76.441121539" observedRunningTime="2025-09-12 17:40:44.116742363 +0000 UTC m=+78.072969294" 
watchObservedRunningTime="2025-09-12 17:40:44.119372722 +0000 UTC m=+78.075599657" Sep 12 17:40:44.178431 systemd[1]: cri-containerd-eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98.scope: Deactivated successfully. Sep 12 17:40:44.253964 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98-rootfs.mount: Deactivated successfully. Sep 12 17:40:44.370979 containerd[1471]: time="2025-09-12T17:40:44.311926031Z" level=info msg="shim disconnected" id=eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98 namespace=k8s.io Sep 12 17:40:44.370979 containerd[1471]: time="2025-09-12T17:40:44.370108857Z" level=warning msg="cleaning up after shim disconnected" id=eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98 namespace=k8s.io Sep 12 17:40:44.370979 containerd[1471]: time="2025-09-12T17:40:44.370133326Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:44.503915 containerd[1471]: time="2025-09-12T17:40:44.503854709Z" level=info msg="StopContainer for \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\" returns successfully" Sep 12 17:40:44.509253 containerd[1471]: time="2025-09-12T17:40:44.509194041Z" level=info msg="StopPodSandbox for \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\"" Sep 12 17:40:44.513110 containerd[1471]: time="2025-09-12T17:40:44.510534405Z" level=info msg="Container to stop \"eb83eafa75e4436155a53bfc2caee98b7b45d7ac2eb9efbdbf191433e5e84f98\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:40:44.515730 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f-shm.mount: Deactivated successfully. Sep 12 17:40:44.527430 systemd[1]: cri-containerd-d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f.scope: Deactivated successfully. Sep 12 17:40:44.567683 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f-rootfs.mount: Deactivated successfully. Sep 12 17:40:44.570310 containerd[1471]: time="2025-09-12T17:40:44.568281186Z" level=info msg="shim disconnected" id=d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f namespace=k8s.io Sep 12 17:40:44.570310 containerd[1471]: time="2025-09-12T17:40:44.568352033Z" level=warning msg="cleaning up after shim disconnected" id=d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f namespace=k8s.io Sep 12 17:40:44.570310 containerd[1471]: time="2025-09-12T17:40:44.568365198Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:40:44.927797 systemd-networkd[1365]: calibe48816aabe: Link DOWN Sep 12 17:40:44.927811 systemd-networkd[1365]: calibe48816aabe: Lost carrier Sep 12 17:40:45.026008 kubelet[2512]: I0912 17:40:45.016125 2512 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Sep 12 17:40:45.093483 systemd[1]: run-containerd-runc-k8s.io-dca7c57263a59d13eab940150f00c3a884a3352b67ec0dbbf2f8c7fd8e0ee07d-runc.64abHm.mount: Deactivated successfully. 
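
Note: StopContainer above runs "with timeout 30 (s)": containerd first delivers SIGTERM ("Stop container ... with signal terminated") and escalates only if the container outlives the grace period, after which systemd reports the cri-containerd scope as deactivated. A generic, illustrative Python sketch of that terminate-then-kill pattern (not containerd's code; assumes a POSIX host):

    import subprocess

    def stop_with_grace(proc: subprocess.Popen, timeout: float = 30.0) -> int:
        proc.terminate()                       # SIGTERM: "signal terminated"
        try:
            return proc.wait(timeout=timeout)  # give the process the grace period
        except subprocess.TimeoutExpired:
            proc.kill()                        # SIGKILL once the budget is spent
            return proc.wait()

    p = subprocess.Popen(["sleep", "300"])
    print(stop_with_grace(p, timeout=5.0))     # usually -15 on POSIX (killed by SIGTERM)
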
Sep 12 17:40:45.258187 kubelet[2512]: E0912 17:40:45.257889 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.895 [INFO][6202] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.898 [INFO][6202] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" iface="eth0" netns="/var/run/netns/cni-3169ffc3-ae86-c9b8-34fb-a887e283431b" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.899 [INFO][6202] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" iface="eth0" netns="/var/run/netns/cni-3169ffc3-ae86-c9b8-34fb-a887e283431b" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.921 [INFO][6202] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" after=23.140333ms iface="eth0" netns="/var/run/netns/cni-3169ffc3-ae86-c9b8-34fb-a887e283431b" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.922 [INFO][6202] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:44.922 [INFO][6202] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.223 [INFO][6214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.225 [INFO][6214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.225 [INFO][6214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.405 [INFO][6214] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.405 [INFO][6214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" HandleID="k8s-pod-network.d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Workload="ci--4081.3.6--a--756b4d7dc2-k8s-calico--apiserver--854494659d--kvgvx-eth0" Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.411 [INFO][6214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:40:45.431311 containerd[1471]: 2025-09-12 17:40:45.419 [INFO][6202] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f" Sep 12 17:40:45.435585 containerd[1471]: time="2025-09-12T17:40:45.432823218Z" level=info msg="TearDown network for sandbox \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\" successfully" Sep 12 17:40:45.435585 containerd[1471]: time="2025-09-12T17:40:45.433286806Z" level=info msg="StopPodSandbox for \"d671fcc37e66c4734737bee71097b30c017a9eb17d999e32b3dd931e0fac480f\" returns successfully" Sep 12 17:40:45.438513 systemd[1]: run-netns-cni\x2d3169ffc3\x2dae86\x2dc9b8\x2d34fb\x2da887e283431b.mount: Deactivated successfully. Sep 12 17:40:45.479975 containerd[1471]: time="2025-09-12T17:40:45.479911314Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:45.482293 containerd[1471]: time="2025-09-12T17:40:45.482167668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 17:40:45.483434 containerd[1471]: time="2025-09-12T17:40:45.483286975Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:45.490508 containerd[1471]: time="2025-09-12T17:40:45.490314814Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:40:45.493455 containerd[1471]: time="2025-09-12T17:40:45.493316478Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.55369955s" Sep 12 17:40:45.493455 containerd[1471]: time="2025-09-12T17:40:45.493399148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 17:40:45.514982 containerd[1471]: time="2025-09-12T17:40:45.514848022Z" level=info msg="CreateContainer within sandbox \"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 17:40:45.597823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1511194309.mount: Deactivated successfully. 
Sep 12 17:40:45.614727 containerd[1471]: time="2025-09-12T17:40:45.614648894Z" level=info msg="CreateContainer within sandbox \"8643ec49635a1a7e848f93b17ccb4d321d8095ebcad20ea0a80532c5ea2b6493\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"c22e6d195271305fbe157024742c03322fd13f443cc2c25cabf8e42a97833cc5\"" Sep 12 17:40:45.621524 kubelet[2512]: I0912 17:40:45.621467 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcab9732-3c6c-4b48-8986-45fc7d97bb57-calico-apiserver-certs\") pod \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\" (UID: \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\") " Sep 12 17:40:45.621709 kubelet[2512]: I0912 17:40:45.621588 2512 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jp4x\" (UniqueName: \"kubernetes.io/projected/fcab9732-3c6c-4b48-8986-45fc7d97bb57-kube-api-access-2jp4x\") pod \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\" (UID: \"fcab9732-3c6c-4b48-8986-45fc7d97bb57\") " Sep 12 17:40:45.635672 containerd[1471]: time="2025-09-12T17:40:45.635625263Z" level=info msg="StartContainer for \"c22e6d195271305fbe157024742c03322fd13f443cc2c25cabf8e42a97833cc5\"" Sep 12 17:40:45.701330 kubelet[2512]: I0912 17:40:45.697355 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcab9732-3c6c-4b48-8986-45fc7d97bb57-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "fcab9732-3c6c-4b48-8986-45fc7d97bb57" (UID: "fcab9732-3c6c-4b48-8986-45fc7d97bb57"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:40:45.709190 kubelet[2512]: I0912 17:40:45.709049 2512 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcab9732-3c6c-4b48-8986-45fc7d97bb57-kube-api-access-2jp4x" (OuterVolumeSpecName: "kube-api-access-2jp4x") pod "fcab9732-3c6c-4b48-8986-45fc7d97bb57" (UID: "fcab9732-3c6c-4b48-8986-45fc7d97bb57"). InnerVolumeSpecName "kube-api-access-2jp4x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:40:45.730300 kubelet[2512]: I0912 17:40:45.730175 2512 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jp4x\" (UniqueName: \"kubernetes.io/projected/fcab9732-3c6c-4b48-8986-45fc7d97bb57-kube-api-access-2jp4x\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:45.730300 kubelet[2512]: I0912 17:40:45.730267 2512 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fcab9732-3c6c-4b48-8986-45fc7d97bb57-calico-apiserver-certs\") on node \"ci-4081.3.6-a-756b4d7dc2\" DevicePath \"\"" Sep 12 17:40:45.750666 systemd[1]: Started cri-containerd-c22e6d195271305fbe157024742c03322fd13f443cc2c25cabf8e42a97833cc5.scope - libcontainer container c22e6d195271305fbe157024742c03322fd13f443cc2c25cabf8e42a97833cc5. Sep 12 17:40:45.811593 containerd[1471]: time="2025-09-12T17:40:45.811325487Z" level=info msg="StartContainer for \"c22e6d195271305fbe157024742c03322fd13f443cc2c25cabf8e42a97833cc5\" returns successfully" Sep 12 17:40:46.006058 systemd[1]: Started sshd@13-144.126.222.162:22-147.75.109.163:40404.service - OpenSSH per-connection server daemon (147.75.109.163:40404). 
Sep 12 17:40:46.102986 systemd[1]: var-lib-kubelet-pods-fcab9732\x2d3c6c\x2d4b48\x2d8986\x2d45fc7d97bb57-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2jp4x.mount: Deactivated successfully. Sep 12 17:40:46.103585 systemd[1]: var-lib-kubelet-pods-fcab9732\x2d3c6c\x2d4b48\x2d8986\x2d45fc7d97bb57-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Sep 12 17:40:46.115811 systemd[1]: Removed slice kubepods-besteffort-podfcab9732_3c6c_4b48_8986_45fc7d97bb57.slice - libcontainer container kubepods-besteffort-podfcab9732_3c6c_4b48_8986_45fc7d97bb57.slice. Sep 12 17:40:46.288392 kubelet[2512]: I0912 17:40:46.288253 2512 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4k29j" podStartSLOduration=28.266730146 podStartE2EDuration="56.288210321s" podCreationTimestamp="2025-09-12 17:39:50 +0000 UTC" firstStartedPulling="2025-09-12 17:40:17.482279796 +0000 UTC m=+51.438506714" lastFinishedPulling="2025-09-12 17:40:45.503759952 +0000 UTC m=+79.459986889" observedRunningTime="2025-09-12 17:40:46.155154185 +0000 UTC m=+80.111381159" watchObservedRunningTime="2025-09-12 17:40:46.288210321 +0000 UTC m=+80.244437251" Sep 12 17:40:46.394739 sshd[6286]: Accepted publickey for core from 147.75.109.163 port 40404 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:46.406418 sshd[6286]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:46.421563 systemd-logind[1445]: New session 14 of user core. Sep 12 17:40:46.426441 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:40:46.922127 kubelet[2512]: I0912 17:40:46.921006 2512 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 17:40:46.922127 kubelet[2512]: I0912 17:40:46.921246 2512 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 17:40:47.286968 kubelet[2512]: E0912 17:40:47.285455 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:47.489522 sshd[6286]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:47.494952 systemd[1]: sshd@13-144.126.222.162:22-147.75.109.163:40404.service: Deactivated successfully. Sep 12 17:40:47.500063 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:40:47.503025 systemd-logind[1445]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:40:47.505314 systemd-logind[1445]: Removed session 14. Sep 12 17:40:48.236177 kubelet[2512]: I0912 17:40:48.235932 2512 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcab9732-3c6c-4b48-8986-45fc7d97bb57" path="/var/lib/kubelet/pods/fcab9732-3c6c-4b48-8986-45fc7d97bb57/volumes" Sep 12 17:40:49.110558 systemd[1]: run-containerd-runc-k8s.io-021d9dfb44e12b75ce1e4afad6f62f474d6766f1938c6ce26645cac8a026e3dd-runc.VuVl2X.mount: Deactivated successfully. 
Sep 12 17:40:50.232699 kubelet[2512]: E0912 17:40:50.232470 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:40:52.526210 systemd[1]: Started sshd@14-144.126.222.162:22-147.75.109.163:37870.service - OpenSSH per-connection server daemon (147.75.109.163:37870). Sep 12 17:40:52.692842 sshd[6347]: Accepted publickey for core from 147.75.109.163 port 37870 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:52.696714 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:52.707202 systemd-logind[1445]: New session 15 of user core. Sep 12 17:40:52.713030 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:40:53.235475 sshd[6347]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:53.242040 systemd[1]: sshd@14-144.126.222.162:22-147.75.109.163:37870.service: Deactivated successfully. Sep 12 17:40:53.247525 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:40:53.249531 systemd-logind[1445]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:40:53.250984 systemd-logind[1445]: Removed session 15. Sep 12 17:40:58.256623 systemd[1]: Started sshd@15-144.126.222.162:22-147.75.109.163:37880.service - OpenSSH per-connection server daemon (147.75.109.163:37880). Sep 12 17:40:58.354573 sshd[6361]: Accepted publickey for core from 147.75.109.163 port 37880 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:58.357791 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.364490 systemd-logind[1445]: New session 16 of user core. Sep 12 17:40:58.368591 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:40:58.652136 sshd[6361]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:58.668390 systemd[1]: sshd@15-144.126.222.162:22-147.75.109.163:37880.service: Deactivated successfully. Sep 12 17:40:58.681484 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:40:58.685439 systemd-logind[1445]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:40:58.691564 systemd[1]: Started sshd@16-144.126.222.162:22-147.75.109.163:37892.service - OpenSSH per-connection server daemon (147.75.109.163:37892). Sep 12 17:40:58.700055 systemd-logind[1445]: Removed session 16. Sep 12 17:40:58.774717 sshd[6374]: Accepted publickey for core from 147.75.109.163 port 37892 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:58.776603 sshd[6374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:58.784139 systemd-logind[1445]: New session 17 of user core. Sep 12 17:40:58.789347 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:40:59.087988 sshd[6374]: pam_unix(sshd:session): session closed for user core Sep 12 17:40:59.102664 systemd[1]: sshd@16-144.126.222.162:22-147.75.109.163:37892.service: Deactivated successfully. Sep 12 17:40:59.108380 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:40:59.111139 systemd-logind[1445]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:40:59.125063 systemd[1]: Started sshd@17-144.126.222.162:22-147.75.109.163:37896.service - OpenSSH per-connection server daemon (147.75.109.163:37896). Sep 12 17:40:59.126707 systemd-logind[1445]: Removed session 17. 
Sep 12 17:40:59.257129 sshd[6385]: Accepted publickey for core from 147.75.109.163 port 37896 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:40:59.259807 sshd[6385]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:40:59.269253 systemd-logind[1445]: New session 18 of user core. Sep 12 17:40:59.275406 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:41:00.178498 sshd[6385]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:00.194223 systemd[1]: sshd@17-144.126.222.162:22-147.75.109.163:37896.service: Deactivated successfully. Sep 12 17:41:00.199274 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:41:00.201943 systemd-logind[1445]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:41:00.216658 systemd[1]: Started sshd@18-144.126.222.162:22-147.75.109.163:54674.service - OpenSSH per-connection server daemon (147.75.109.163:54674). Sep 12 17:41:00.218963 systemd-logind[1445]: Removed session 18. Sep 12 17:41:00.236320 kubelet[2512]: E0912 17:41:00.236052 2512 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:41:00.393127 sshd[6404]: Accepted publickey for core from 147.75.109.163 port 54674 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:41:00.397221 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:00.405136 systemd-logind[1445]: New session 19 of user core. Sep 12 17:41:00.411600 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:41:01.030670 sshd[6404]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:01.046188 systemd[1]: sshd@18-144.126.222.162:22-147.75.109.163:54674.service: Deactivated successfully. Sep 12 17:41:01.051625 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:41:01.054605 systemd-logind[1445]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:41:01.061378 systemd-logind[1445]: Removed session 19. Sep 12 17:41:01.074450 systemd[1]: Started sshd@19-144.126.222.162:22-147.75.109.163:54688.service - OpenSSH per-connection server daemon (147.75.109.163:54688). Sep 12 17:41:01.191343 sshd[6416]: Accepted publickey for core from 147.75.109.163 port 54688 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:41:01.194065 sshd[6416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:01.204015 systemd-logind[1445]: New session 20 of user core. Sep 12 17:41:01.211429 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:41:01.433250 sshd[6416]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:01.439579 systemd[1]: sshd@19-144.126.222.162:22-147.75.109.163:54688.service: Deactivated successfully. Sep 12 17:41:01.445759 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:41:01.449939 systemd-logind[1445]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:41:01.451656 systemd-logind[1445]: Removed session 20. Sep 12 17:41:06.460565 systemd[1]: Started sshd@20-144.126.222.162:22-147.75.109.163:54690.service - OpenSSH per-connection server daemon (147.75.109.163:54690). 
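
Note: each SSH connection in this log gets its own transient systemd unit named sshd@<n>-<local>:<port>-<peer>:<port>.service. A small illustrative Python parser for the unit names that appear above (a sketch for reading this log, not a systemd interface):

    import re

    UNIT = re.compile(
        r"sshd@(?P<n>\d+)-(?P<laddr>[0-9.]+):(?P<lport>\d+)"
        r"-(?P<raddr>[0-9.]+):(?P<rport>\d+)\.service")

    m = UNIT.fullmatch("sshd@20-144.126.222.162:22-147.75.109.163:54690.service")
    print(m.group("n"), m.group("laddr"), m.group("lport"),
          m.group("raddr"), m.group("rport"))
    # 20 144.126.222.162 22 147.75.109.163 54690
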
Sep 12 17:41:06.526957 sshd[6441]: Accepted publickey for core from 147.75.109.163 port 54690 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:41:06.529521 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:06.538198 systemd-logind[1445]: New session 21 of user core. Sep 12 17:41:06.542468 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:41:06.755506 sshd[6441]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:06.760252 systemd[1]: sshd@20-144.126.222.162:22-147.75.109.163:54690.service: Deactivated successfully. Sep 12 17:41:06.760691 systemd-logind[1445]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:41:06.763853 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:41:06.766929 systemd-logind[1445]: Removed session 21. Sep 12 17:41:09.216168 systemd[1]: run-containerd-runc-k8s.io-1dc8e8f4bcb81660dc6433c521c466e91df1bc37c82302037df4452d96c66c81-runc.XhoUb0.mount: Deactivated successfully. Sep 12 17:41:11.774439 systemd[1]: Started sshd@21-144.126.222.162:22-147.75.109.163:42780.service - OpenSSH per-connection server daemon (147.75.109.163:42780). Sep 12 17:41:11.881334 sshd[6492]: Accepted publickey for core from 147.75.109.163 port 42780 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:41:11.884464 sshd[6492]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:11.893656 systemd-logind[1445]: New session 22 of user core. Sep 12 17:41:11.898359 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:41:12.184933 sshd[6492]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:12.191660 systemd[1]: sshd@21-144.126.222.162:22-147.75.109.163:42780.service: Deactivated successfully. Sep 12 17:41:12.195366 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:41:12.196729 systemd-logind[1445]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:41:12.198863 systemd-logind[1445]: Removed session 22. Sep 12 17:41:17.207896 systemd[1]: Started sshd@22-144.126.222.162:22-147.75.109.163:42788.service - OpenSSH per-connection server daemon (147.75.109.163:42788). Sep 12 17:41:17.411208 sshd[6524]: Accepted publickey for core from 147.75.109.163 port 42788 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:41:17.417811 sshd[6524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:41:17.435571 systemd-logind[1445]: New session 23 of user core. Sep 12 17:41:17.445615 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:41:18.394162 sshd[6524]: pam_unix(sshd:session): session closed for user core Sep 12 17:41:18.400947 systemd-logind[1445]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:41:18.401383 systemd[1]: sshd@22-144.126.222.162:22-147.75.109.163:42788.service: Deactivated successfully. Sep 12 17:41:18.408484 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:41:18.413794 systemd-logind[1445]: Removed session 23.