Sep 12 17:33:44.035157 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 17:33:44.035205 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:33:44.035226 kernel: BIOS-provided physical RAM map:
Sep 12 17:33:44.035238 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 17:33:44.035247 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 17:33:44.035258 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 17:33:44.035269 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Sep 12 17:33:44.035279 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Sep 12 17:33:44.035288 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 17:33:44.035302 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 17:33:44.035312 kernel: NX (Execute Disable) protection: active
Sep 12 17:33:44.035322 kernel: APIC: Static calls initialized
Sep 12 17:33:44.035340 kernel: SMBIOS 2.8 present.
Sep 12 17:33:44.035351 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Sep 12 17:33:44.035364 kernel: Hypervisor detected: KVM
Sep 12 17:33:44.035379 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 17:33:44.035396 kernel: kvm-clock: using sched offset of 3295553924 cycles
Sep 12 17:33:44.035408 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 17:33:44.035420 kernel: tsc: Detected 1999.999 MHz processor
Sep 12 17:33:44.035432 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 17:33:44.035444 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 17:33:44.035457 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Sep 12 17:33:44.035468 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 17:33:44.035480 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 17:33:44.035496 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:33:44.035507 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Sep 12 17:33:44.035519 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035531 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035545 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035557 kernel: ACPI: FACS 0x000000007FFE0000 000040
Sep 12 17:33:44.035569 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035581 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035594 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035611 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:33:44.035623 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Sep 12 17:33:44.035635 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Sep 12 17:33:44.035647 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Sep 12 17:33:44.035660 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Sep 12 17:33:44.035673 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Sep 12 17:33:44.035686 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Sep 12 17:33:44.035709 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Sep 12 17:33:44.035723 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 17:33:44.035731 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 17:33:44.035739 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 17:33:44.035747 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 17:33:44.035762 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Sep 12 17:33:44.035770 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Sep 12 17:33:44.035782 kernel: Zone ranges:
Sep 12 17:33:44.035790 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 17:33:44.035798 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Sep 12 17:33:44.035806 kernel: Normal empty
Sep 12 17:33:44.035814 kernel: Movable zone start for each node
Sep 12 17:33:44.035823 kernel: Early memory node ranges
Sep 12 17:33:44.035831 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 17:33:44.035839 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Sep 12 17:33:44.035847 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Sep 12 17:33:44.035858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 17:33:44.035866 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 17:33:44.035878 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Sep 12 17:33:44.035886 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 17:33:44.035894 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 17:33:44.035902 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 17:33:44.035910 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 17:33:44.035917 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 17:33:44.035926 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 17:33:44.035947 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 17:33:44.035959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 17:33:44.035972 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 17:33:44.036031 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Sep 12 17:33:44.036043 kernel: TSC deadline timer available
Sep 12 17:33:44.036056 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Sep 12 17:33:44.036069 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 17:33:44.036081 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Sep 12 17:33:44.036100 kernel: Booting paravirtualized kernel on KVM
Sep 12 17:33:44.036113 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 17:33:44.036133 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Sep 12 17:33:44.036147 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u1048576
Sep 12 17:33:44.036160 kernel: pcpu-alloc: s197160 r8192 d32216 u1048576 alloc=1*2097152
Sep 12 17:33:44.036174 kernel: pcpu-alloc: [0] 0 1
Sep 12 17:33:44.036188 kernel: kvm-guest: PV spinlocks disabled, no host support
Sep 12 17:33:44.036205 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 17:33:44.036216 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:33:44.036224 kernel: random: crng init done
Sep 12 17:33:44.036237 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:33:44.036253 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 17:33:44.036267 kernel: Fallback order for Node 0: 0
Sep 12 17:33:44.036280 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Sep 12 17:33:44.036292 kernel: Policy zone: DMA32
Sep 12 17:33:44.036306 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:33:44.036321 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 125152K reserved, 0K cma-reserved)
Sep 12 17:33:44.036336 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Sep 12 17:33:44.036349 kernel: Kernel/User page tables isolation: enabled
Sep 12 17:33:44.036359 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 17:33:44.036373 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 17:33:44.036387 kernel: Dynamic Preempt: voluntary
Sep 12 17:33:44.036402 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:33:44.036419 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:33:44.036430 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Sep 12 17:33:44.036438 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:33:44.036448 kernel: Rude variant of Tasks RCU enabled.
Sep 12 17:33:44.036462 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:33:44.036483 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:33:44.036497 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Sep 12 17:33:44.036512 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Sep 12 17:33:44.036526 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:33:44.036548 kernel: Console: colour VGA+ 80x25
Sep 12 17:33:44.036562 kernel: printk: console [tty0] enabled
Sep 12 17:33:44.036577 kernel: printk: console [ttyS0] enabled
Sep 12 17:33:44.036591 kernel: ACPI: Core revision 20230628
Sep 12 17:33:44.036606 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Sep 12 17:33:44.036626 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 17:33:44.036638 kernel: x2apic enabled
Sep 12 17:33:44.036651 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 17:33:44.036663 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Sep 12 17:33:44.036675 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Sep 12 17:33:44.036686 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Sep 12 17:33:44.036698 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 17:33:44.036711 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 17:33:44.036741 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 17:33:44.036753 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 17:33:44.036767 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 17:33:44.036786 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 17:33:44.036800 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 17:33:44.036814 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 17:33:44.036829 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 17:33:44.036844 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Sep 12 17:33:44.036860 kernel: active return thunk: its_return_thunk
Sep 12 17:33:44.036886 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 17:33:44.036902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 17:33:44.036917 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 17:33:44.036934 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 17:33:44.036946 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 17:33:44.036955 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 17:33:44.036964 kernel: Freeing SMP alternatives memory: 32K
Sep 12 17:33:44.037058 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:33:44.037082 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 17:33:44.037095 kernel: landlock: Up and running.
Sep 12 17:33:44.037109 kernel: SELinux: Initializing.
Sep 12 17:33:44.037123 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:33:44.037136 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 17:33:44.037150 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Sep 12 17:33:44.037164 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:44.037179 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:44.037193 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Sep 12 17:33:44.037211 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Sep 12 17:33:44.037223 kernel: signal: max sigframe size: 1776
Sep 12 17:33:44.037237 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:33:44.037252 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:33:44.037265 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 17:33:44.037278 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:33:44.037291 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 17:33:44.037303 kernel: .... node #0, CPUs: #1
Sep 12 17:33:44.037323 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 17:33:44.037342 kernel: smpboot: Max logical packages: 1
Sep 12 17:33:44.037356 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Sep 12 17:33:44.037369 kernel: devtmpfs: initialized
Sep 12 17:33:44.037383 kernel: x86/mm: Memory block size: 128MB
Sep 12 17:33:44.037397 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:33:44.037412 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Sep 12 17:33:44.037425 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:33:44.037439 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:33:44.037452 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:33:44.037489 kernel: audit: type=2000 audit(1757698423.100:1): state=initialized audit_enabled=0 res=1
Sep 12 17:33:44.037503 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:33:44.037517 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 17:33:44.037531 kernel: cpuidle: using governor menu
Sep 12 17:33:44.037546 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:33:44.037558 kernel: dca service started, version 1.12.1
Sep 12 17:33:44.037570 kernel: PCI: Using configuration type 1 for base access
Sep 12 17:33:44.037584 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 17:33:44.037597 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:33:44.037617 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:33:44.037630 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:33:44.037644 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:33:44.037658 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:33:44.037673 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:33:44.037686 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 17:33:44.037700 kernel: ACPI: Interpreter enabled
Sep 12 17:33:44.037714 kernel: ACPI: PM: (supports S0 S5)
Sep 12 17:33:44.037729 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 17:33:44.037749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 17:33:44.037764 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 17:33:44.037778 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Sep 12 17:33:44.037793 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:33:44.038158 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:33:44.038280 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Sep 12 17:33:44.038420 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Sep 12 17:33:44.038446 kernel: acpiphp: Slot [3] registered
Sep 12 17:33:44.038461 kernel: acpiphp: Slot [4] registered
Sep 12 17:33:44.038477 kernel: acpiphp: Slot [5] registered
Sep 12 17:33:44.038491 kernel: acpiphp: Slot [6] registered
Sep 12 17:33:44.038506 kernel: acpiphp: Slot [7] registered
Sep 12 17:33:44.038521 kernel: acpiphp: Slot [8] registered
Sep 12 17:33:44.038534 kernel: acpiphp: Slot [9] registered
Sep 12 17:33:44.038548 kernel: acpiphp: Slot [10] registered
Sep 12 17:33:44.038564 kernel: acpiphp: Slot [11] registered
Sep 12 17:33:44.038581 kernel: acpiphp: Slot [12] registered
Sep 12 17:33:44.038590 kernel: acpiphp: Slot [13] registered
Sep 12 17:33:44.038599 kernel: acpiphp: Slot [14] registered
Sep 12 17:33:44.038610 kernel: acpiphp: Slot [15] registered
Sep 12 17:33:44.038625 kernel: acpiphp: Slot [16] registered
Sep 12 17:33:44.038640 kernel: acpiphp: Slot [17] registered
Sep 12 17:33:44.038655 kernel: acpiphp: Slot [18] registered
Sep 12 17:33:44.038670 kernel: acpiphp: Slot [19] registered
Sep 12 17:33:44.038685 kernel: acpiphp: Slot [20] registered
Sep 12 17:33:44.038695 kernel: acpiphp: Slot [21] registered
Sep 12 17:33:44.038714 kernel: acpiphp: Slot [22] registered
Sep 12 17:33:44.038729 kernel: acpiphp: Slot [23] registered
Sep 12 17:33:44.038764 kernel: acpiphp: Slot [24] registered
Sep 12 17:33:44.038799 kernel: acpiphp: Slot [25] registered
Sep 12 17:33:44.038817 kernel: acpiphp: Slot [26] registered
Sep 12 17:33:44.038829 kernel: acpiphp: Slot [27] registered
Sep 12 17:33:44.038844 kernel: acpiphp: Slot [28] registered
Sep 12 17:33:44.038859 kernel: acpiphp: Slot [29] registered
Sep 12 17:33:44.038874 kernel: acpiphp: Slot [30] registered
Sep 12 17:33:44.038895 kernel: acpiphp: Slot [31] registered
Sep 12 17:33:44.038910 kernel: PCI host bridge to bus 0000:00
Sep 12 17:33:44.039110 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 17:33:44.039211 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 17:33:44.039307 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 17:33:44.039400 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Sep 12 17:33:44.039525 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Sep 12 17:33:44.039645 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:33:44.039899 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Sep 12 17:33:44.042241 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Sep 12 17:33:44.042436 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Sep 12 17:33:44.042551 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Sep 12 17:33:44.042706 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Sep 12 17:33:44.042811 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Sep 12 17:33:44.042940 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Sep 12 17:33:44.045198 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Sep 12 17:33:44.045365 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Sep 12 17:33:44.045596 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Sep 12 17:33:44.045766 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Sep 12 17:33:44.045869 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Sep 12 17:33:44.046261 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Sep 12 17:33:44.046425 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Sep 12 17:33:44.046587 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Sep 12 17:33:44.046747 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Sep 12 17:33:44.046905 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Sep 12 17:33:44.049928 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Sep 12 17:33:44.050174 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 17:33:44.050335 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:33:44.050476 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Sep 12 17:33:44.050595 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Sep 12 17:33:44.050712 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Sep 12 17:33:44.050855 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Sep 12 17:33:44.051002 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Sep 12 17:33:44.051118 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Sep 12 17:33:44.051262 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Sep 12 17:33:44.051375 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Sep 12 17:33:44.051478 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Sep 12 17:33:44.051578 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Sep 12 17:33:44.051681 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Sep 12 17:33:44.051824 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:33:44.051927 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 17:33:44.052048 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Sep 12 17:33:44.052161 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Sep 12 17:33:44.052342 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Sep 12 17:33:44.052473 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Sep 12 17:33:44.052572 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Sep 12 17:33:44.052725 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Sep 12 17:33:44.052905 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Sep 12 17:33:44.055193 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Sep 12 17:33:44.055403 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Sep 12 17:33:44.055421 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 17:33:44.055432 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 17:33:44.055442 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 17:33:44.055451 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 17:33:44.055470 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Sep 12 17:33:44.055479 kernel: iommu: Default domain type: Translated
Sep 12 17:33:44.055488 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 17:33:44.055497 kernel: PCI: Using ACPI for IRQ routing
Sep 12 17:33:44.055506 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 17:33:44.055515 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 17:33:44.055524 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Sep 12 17:33:44.055655 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Sep 12 17:33:44.055761 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Sep 12 17:33:44.055868 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 17:33:44.055880 kernel: vgaarb: loaded
Sep 12 17:33:44.055889 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Sep 12 17:33:44.055898 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Sep 12 17:33:44.055907 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 17:33:44.055916 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:33:44.055925 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:33:44.055934 kernel: pnp: PnP ACPI init
Sep 12 17:33:44.055943 kernel: pnp: PnP ACPI: found 4 devices
Sep 12 17:33:44.055955 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 17:33:44.055964 kernel: NET: Registered PF_INET protocol family
Sep 12 17:33:44.056301 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:33:44.056332 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 17:33:44.056347 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:33:44.056362 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 17:33:44.056377 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 17:33:44.056393 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 17:33:44.056406 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:33:44.056422 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 17:33:44.056431 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:33:44.056440 kernel: NET: Registered PF_XDP protocol family
Sep 12 17:33:44.056599 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 17:33:44.056727 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 17:33:44.056877 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 17:33:44.057064 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Sep 12 17:33:44.057200 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Sep 12 17:33:44.057344 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Sep 12 17:33:44.057458 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Sep 12 17:33:44.057547 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Sep 12 17:33:44.057709 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 35445 usecs
Sep 12 17:33:44.057724 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:33:44.057734 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Sep 12 17:33:44.057743 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Sep 12 17:33:44.057753 kernel: Initialise system trusted keyrings
Sep 12 17:33:44.057770 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Sep 12 17:33:44.057779 kernel: Key type asymmetric registered
Sep 12 17:33:44.057788 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:33:44.057798 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Sep 12 17:33:44.057808 kernel: io scheduler mq-deadline registered
Sep 12 17:33:44.057823 kernel: io scheduler kyber registered
Sep 12 17:33:44.057832 kernel: io scheduler bfq registered
Sep 12 17:33:44.057841 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Sep 12 17:33:44.057850 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Sep 12 17:33:44.057859 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Sep 12 17:33:44.057871 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Sep 12 17:33:44.057880 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:33:44.057889 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Sep 12 17:33:44.057898 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Sep 12 17:33:44.057907 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Sep 12 17:33:44.057916 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Sep 12 17:33:44.057926 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Sep 12 17:33:44.058105 kernel: rtc_cmos 00:03: RTC can wake from S4
Sep 12 17:33:44.058207 kernel: rtc_cmos 00:03: registered as rtc0
Sep 12 17:33:44.058306 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T17:33:43 UTC (1757698423)
Sep 12 17:33:44.058399 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Sep 12 17:33:44.058410 kernel: intel_pstate: CPU model not supported
Sep 12 17:33:44.058419 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:33:44.058428 kernel: Segment Routing with IPv6
Sep 12 17:33:44.058437 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:33:44.058445 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:33:44.058457 kernel: Key type dns_resolver registered
Sep 12 17:33:44.058466 kernel: IPI shorthand broadcast: enabled
Sep 12 17:33:44.058475 kernel: sched_clock: Marking stable (1069005168, 138081477)->(1335276154, -128189509)
Sep 12 17:33:44.058483 kernel: registered taskstats version 1
Sep 12 17:33:44.058492 kernel: Loading compiled-in X.509 certificates
Sep 12 17:33:44.058502 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9'
Sep 12 17:33:44.058516 kernel: Key type .fscrypt registered
Sep 12 17:33:44.058528 kernel: Key type fscrypt-provisioning registered
Sep 12 17:33:44.058542 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:33:44.058561 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:33:44.058574 kernel: ima: No architecture policies found
Sep 12 17:33:44.058588 kernel: clk: Disabling unused clocks
Sep 12 17:33:44.058602 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 17:33:44.058615 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 17:33:44.058653 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 17:33:44.058672 kernel: Run /init as init process
Sep 12 17:33:44.058682 kernel: with arguments:
Sep 12 17:33:44.058691 kernel: /init
Sep 12 17:33:44.058703 kernel: with environment:
Sep 12 17:33:44.058714 kernel: HOME=/
Sep 12 17:33:44.058724 kernel: TERM=linux
Sep 12 17:33:44.058733 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:33:44.058745 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:33:44.058758 systemd[1]: Detected virtualization kvm.
Sep 12 17:33:44.058767 systemd[1]: Detected architecture x86-64.
Sep 12 17:33:44.058777 systemd[1]: Running in initrd.
Sep 12 17:33:44.058789 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:33:44.058799 systemd[1]: Hostname set to .
Sep 12 17:33:44.058809 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:33:44.058819 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:33:44.058829 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:33:44.058838 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:33:44.058849 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:33:44.058859 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:33:44.058871 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:33:44.058882 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:33:44.058893 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:33:44.058903 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:33:44.058912 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:33:44.058922 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:33:44.058931 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:33:44.058944 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:33:44.058954 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:33:44.058965 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:33:44.058991 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:33:44.059016 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:33:44.059030 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:33:44.059039 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 17:33:44.059049 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:33:44.059059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:33:44.059068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:33:44.059078 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:33:44.059087 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 12 17:33:44.059096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:33:44.059106 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 12 17:33:44.059118 systemd[1]: Starting systemd-fsck-usr.service... Sep 12 17:33:44.059128 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:33:44.059137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:33:44.059147 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:44.059156 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 12 17:33:44.059166 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:33:44.059213 systemd-journald[185]: Collecting audit messages is disabled. Sep 12 17:33:44.059241 systemd[1]: Finished systemd-fsck-usr.service. Sep 12 17:33:44.059252 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 17:33:44.059267 systemd-journald[185]: Journal started Sep 12 17:33:44.059289 systemd-journald[185]: Runtime Journal (/run/log/journal/4a40b43142984bae8822ceb80081581d) is 4.9M, max 39.3M, 34.4M free. Sep 12 17:33:44.043506 systemd-modules-load[186]: Inserted module 'overlay' Sep 12 17:33:44.112807 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 12 17:33:44.112867 kernel: Bridge firewalling registered Sep 12 17:33:44.112887 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:33:44.088737 systemd-modules-load[186]: Inserted module 'br_netfilter' Sep 12 17:33:44.120185 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:33:44.126700 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:44.128250 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 17:33:44.135416 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:44.143297 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:33:44.147514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:33:44.148883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:33:44.170456 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:33:44.180046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:33:44.181859 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:44.188409 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 12 17:33:44.189576 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:33:44.195356 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 12 17:33:44.212757 dracut-cmdline[216]: dracut-dracut-053 Sep 12 17:33:44.219018 dracut-cmdline[216]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a Sep 12 17:33:44.240540 systemd-resolved[220]: Positive Trust Anchors: Sep 12 17:33:44.240560 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:33:44.240596 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:33:44.244670 systemd-resolved[220]: Defaulting to hostname 'linux'. Sep 12 17:33:44.246297 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:33:44.247874 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:33:44.335057 kernel: SCSI subsystem initialized Sep 12 17:33:44.347045 kernel: Loading iSCSI transport class v2.0-870. 
Sep 12 17:33:44.362052 kernel: iscsi: registered transport (tcp) Sep 12 17:33:44.390039 kernel: iscsi: registered transport (qla4xxx) Sep 12 17:33:44.390162 kernel: QLogic iSCSI HBA Driver Sep 12 17:33:44.453109 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 12 17:33:44.462399 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 12 17:33:44.502035 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:33:44.504420 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:33:44.504551 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 12 17:33:44.562094 kernel: raid6: avx2x4 gen() 13986 MB/s Sep 12 17:33:44.580052 kernel: raid6: avx2x2 gen() 13766 MB/s Sep 12 17:33:44.598606 kernel: raid6: avx2x1 gen() 9836 MB/s Sep 12 17:33:44.598715 kernel: raid6: using algorithm avx2x4 gen() 13986 MB/s Sep 12 17:33:44.617666 kernel: raid6: .... xor() 5846 MB/s, rmw enabled Sep 12 17:33:44.617781 kernel: raid6: using avx2x2 recovery algorithm Sep 12 17:33:44.654039 kernel: xor: automatically using best checksumming function avx Sep 12 17:33:44.916035 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:33:44.932932 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:33:44.940286 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:33:44.971429 systemd-udevd[403]: Using default interface naming scheme 'v255'. Sep 12 17:33:44.978375 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:33:44.985159 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:33:45.005045 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Sep 12 17:33:45.044439 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 12 17:33:45.050353 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:33:45.140822 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:33:45.150422 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:33:45.171042 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:33:45.175023 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:33:45.177023 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:33:45.177613 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:33:45.185206 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:33:45.214161 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:33:45.235714 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Sep 12 17:33:45.258393 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Sep 12 17:33:45.274666 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:33:45.274755 kernel: GPT:9289727 != 125829119 Sep 12 17:33:45.274774 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:33:45.274793 kernel: GPT:9289727 != 125829119 Sep 12 17:33:45.274810 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:33:45.274829 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:45.277011 kernel: scsi host0: Virtio SCSI HBA Sep 12 17:33:45.283163 kernel: cryptd: max_cpu_qlen set to 1000 Sep 12 17:33:45.305019 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Sep 12 17:33:45.308628 kernel: virtio_blk virtio5: [vdb] 980 512-byte logical blocks (502 kB/490 KiB) Sep 12 17:33:45.320642 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Sep 12 17:33:45.325408 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:45.327340 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:45.328007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:45.328955 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:45.330366 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:45.340444 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:45.343255 kernel: libata version 3.00 loaded. Sep 12 17:33:45.351173 kernel: ata_piix 0000:00:01.1: version 2.13 Sep 12 17:33:45.358049 kernel: AVX2 version of gcm_enc/dec engaged. Sep 12 17:33:45.358142 kernel: scsi host1: ata_piix Sep 12 17:33:45.362863 kernel: scsi host2: ata_piix Sep 12 17:33:45.363202 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Sep 12 17:33:45.363217 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Sep 12 17:33:45.370028 kernel: AES CTR mode by8 optimization enabled Sep 12 17:33:45.372018 kernel: ACPI: bus type USB registered Sep 12 17:33:45.390028 kernel: usbcore: registered new interface driver usbfs Sep 12 17:33:45.390116 kernel: usbcore: registered new interface driver hub Sep 12 17:33:45.390141 kernel: usbcore: registered new device driver usb Sep 12 17:33:45.396029 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (463) Sep 12 17:33:45.421020 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (448) Sep 12 17:33:45.456346 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:33:45.495528 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Sep 12 17:33:45.512854 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:33:45.523256 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:33:45.524158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:33:45.536199 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:33:45.545256 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:33:45.549700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 12 17:33:45.556144 disk-uuid[538]: Primary Header is updated. Sep 12 17:33:45.556144 disk-uuid[538]: Secondary Entries is updated. Sep 12 17:33:45.556144 disk-uuid[538]: Secondary Header is updated. Sep 12 17:33:45.567031 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:45.574012 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:45.587926 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Sep 12 17:33:45.588348 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Sep 12 17:33:45.593428 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Sep 12 17:33:45.594635 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:33:45.600181 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Sep 12 17:33:45.606069 kernel: hub 1-0:1.0: USB hub found Sep 12 17:33:45.606508 kernel: hub 1-0:1.0: 2 ports detected Sep 12 17:33:46.585116 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:33:46.586836 disk-uuid[544]: The operation has completed successfully. Sep 12 17:33:46.636052 systemd[1]: disk-uuid.service: Deactivated successfully. 
Sep 12 17:33:46.636201 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:33:46.641223 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:33:46.657876 sh[565]: Success Sep 12 17:33:46.675049 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Sep 12 17:33:46.756267 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 12 17:33:46.760148 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:33:46.764009 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:33:46.792134 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19 Sep 12 17:33:46.792276 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:46.794063 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Sep 12 17:33:46.796161 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:33:46.798504 kernel: BTRFS info (device dm-0): using free space tree Sep 12 17:33:46.809730 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:33:46.810948 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:33:46.817277 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:33:46.820195 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Sep 12 17:33:46.838737 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:46.838822 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:46.838835 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:33:46.843008 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:33:46.856400 systemd[1]: mnt-oem.mount: Deactivated successfully. Sep 12 17:33:46.859242 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:46.865554 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 12 17:33:46.873269 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 12 17:33:47.015542 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:33:47.031465 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:33:47.071106 ignition[659]: Ignition 2.19.0 Sep 12 17:33:47.072242 ignition[659]: Stage: fetch-offline Sep 12 17:33:47.072379 ignition[659]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:47.072397 ignition[659]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:47.072613 ignition[659]: parsed url from cmdline: "" Sep 12 17:33:47.072620 ignition[659]: no config URL provided Sep 12 17:33:47.072629 ignition[659]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:33:47.072642 ignition[659]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:33:47.076856 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Sep 12 17:33:47.072651 ignition[659]: failed to fetch config: resource requires networking Sep 12 17:33:47.078038 systemd-networkd[754]: lo: Link UP Sep 12 17:33:47.072991 ignition[659]: Ignition finished successfully Sep 12 17:33:47.078051 systemd-networkd[754]: lo: Gained carrier Sep 12 17:33:47.084062 systemd-networkd[754]: Enumeration completed Sep 12 17:33:47.084232 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:33:47.085155 systemd[1]: Reached target network.target - Network. Sep 12 17:33:47.085891 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 17:33:47.085897 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Sep 12 17:33:47.087609 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:47.087615 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:33:47.088450 systemd-networkd[754]: eth0: Link UP Sep 12 17:33:47.088455 systemd-networkd[754]: eth0: Gained carrier Sep 12 17:33:47.088466 systemd-networkd[754]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Sep 12 17:33:47.095683 systemd-networkd[754]: eth1: Link UP Sep 12 17:33:47.095689 systemd-networkd[754]: eth1: Gained carrier Sep 12 17:33:47.095711 systemd-networkd[754]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:33:47.096573 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Sep 12 17:33:47.112143 systemd-networkd[754]: eth1: DHCPv4 address 10.124.0.19/20 acquired from 169.254.169.253 Sep 12 17:33:47.117172 systemd-networkd[754]: eth0: DHCPv4 address 64.227.109.162/20, gateway 64.227.96.1 acquired from 169.254.169.253 Sep 12 17:33:47.138584 ignition[759]: Ignition 2.19.0 Sep 12 17:33:47.138603 ignition[759]: Stage: fetch Sep 12 17:33:47.138927 ignition[759]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:47.138947 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:47.140543 ignition[759]: parsed url from cmdline: "" Sep 12 17:33:47.140551 ignition[759]: no config URL provided Sep 12 17:33:47.140563 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Sep 12 17:33:47.140582 ignition[759]: no config at "/usr/lib/ignition/user.ign" Sep 12 17:33:47.140616 ignition[759]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Sep 12 17:33:47.229748 ignition[759]: GET result: OK Sep 12 17:33:47.232177 ignition[759]: parsing config with SHA512: b1c8c14d3239a22dc8ed69da3508396d532a2246f7f1ff93ec7c20543849fb879e2a3b01b94a25b462d36f653b188eac5541c400eeb5d6626b7f3a26f8622ac2 Sep 12 17:33:47.295269 unknown[759]: fetched base config from "system" Sep 12 17:33:47.295303 unknown[759]: fetched base config from "system" Sep 12 17:33:47.295314 unknown[759]: fetched user config from "digitalocean" Sep 12 17:33:47.325729 ignition[759]: fetch: fetch complete Sep 12 17:33:47.325747 ignition[759]: fetch: fetch passed Sep 12 17:33:47.337530 ignition[759]: Ignition finished successfully Sep 12 17:33:47.352645 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Sep 12 17:33:47.360458 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Sep 12 17:33:47.407841 ignition[766]: Ignition 2.19.0 Sep 12 17:33:47.407860 ignition[766]: Stage: kargs Sep 12 17:33:47.409646 ignition[766]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:47.409673 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:47.414761 ignition[766]: kargs: kargs passed Sep 12 17:33:47.414855 ignition[766]: Ignition finished successfully Sep 12 17:33:47.416545 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 12 17:33:47.424355 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Sep 12 17:33:47.469133 ignition[772]: Ignition 2.19.0 Sep 12 17:33:47.469158 ignition[772]: Stage: disks Sep 12 17:33:47.469674 ignition[772]: no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:47.469700 ignition[772]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:47.471410 ignition[772]: disks: disks passed Sep 12 17:33:47.472535 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 12 17:33:47.471480 ignition[772]: Ignition finished successfully Sep 12 17:33:47.478759 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 12 17:33:47.480020 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 17:33:47.481372 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 17:33:47.482822 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:33:47.484216 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:33:47.498328 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 12 17:33:47.517587 systemd-fsck[780]: ROOT: clean, 14/553520 files, 52654/553472 blocks Sep 12 17:33:47.521228 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 12 17:33:47.531148 systemd[1]: Mounting sysroot.mount - /sysroot... 
Sep 12 17:33:47.654150 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none. Sep 12 17:33:47.655227 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:33:47.656571 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:33:47.664181 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:33:47.667140 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:33:47.672255 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent... Sep 12 17:33:47.681249 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Sep 12 17:33:47.684099 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:33:47.684161 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:33:47.689757 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (788) Sep 12 17:33:47.701209 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:47.705118 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:47.705199 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:33:47.710045 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:33:47.712400 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:33:47.716948 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 12 17:33:47.725311 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Sep 12 17:33:47.800793 initrd-setup-root[818]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:33:47.805029 coreos-metadata[790]: Sep 12 17:33:47.803 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:33:47.816433 initrd-setup-root[825]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:33:47.818415 coreos-metadata[790]: Sep 12 17:33:47.818 INFO Fetch successful Sep 12 17:33:47.819462 coreos-metadata[791]: Sep 12 17:33:47.819 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:33:47.830507 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully. Sep 12 17:33:47.830706 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent. Sep 12 17:33:47.835748 initrd-setup-root[833]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:33:47.837316 coreos-metadata[791]: Sep 12 17:33:47.832 INFO Fetch successful Sep 12 17:33:47.843086 coreos-metadata[791]: Sep 12 17:33:47.842 INFO wrote hostname ci-4081.3.6-8-31c29e3945 to /sysroot/etc/hostname Sep 12 17:33:47.844577 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Sep 12 17:33:47.849580 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:33:47.984557 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:33:47.989189 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:33:47.993277 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:33:48.011073 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:48.011098 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:33:48.040442 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Sep 12 17:33:48.053669 ignition[909]: INFO : Ignition 2.19.0 Sep 12 17:33:48.053669 ignition[909]: INFO : Stage: mount Sep 12 17:33:48.055280 ignition[909]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:48.055280 ignition[909]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:48.055280 ignition[909]: INFO : mount: mount passed Sep 12 17:33:48.055280 ignition[909]: INFO : Ignition finished successfully Sep 12 17:33:48.058739 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:33:48.069237 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:33:48.080347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:33:48.105027 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (920) Sep 12 17:33:48.109046 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc Sep 12 17:33:48.109157 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Sep 12 17:33:48.111295 kernel: BTRFS info (device vda6): using free space tree Sep 12 17:33:48.115024 kernel: BTRFS info (device vda6): auto enabling async discard Sep 12 17:33:48.117961 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:33:48.155266 ignition[937]: INFO : Ignition 2.19.0 Sep 12 17:33:48.156997 ignition[937]: INFO : Stage: files Sep 12 17:33:48.156997 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:33:48.156997 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Sep 12 17:33:48.159005 ignition[937]: DEBUG : files: compiled without relabeling support, skipping Sep 12 17:33:48.160672 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 12 17:33:48.160672 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 12 17:33:48.164070 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 12 17:33:48.165046 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 12 17:33:48.166273 unknown[937]: wrote ssh authorized keys file for user: core Sep 12 17:33:48.167364 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 12 17:33:48.168791 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:33:48.170060 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Sep 12 17:33:48.224192 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 12 17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Sep 12 17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Sep 12 17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Sep 12 
17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 12 17:33:48.406062 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:33:48.413590 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Sep 12 17:33:48.480438 systemd-networkd[754]: eth0: Gained IPv6LL Sep 12 17:33:48.544396 systemd-networkd[754]: eth1: Gained IPv6LL Sep 12 17:33:48.890266 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Sep 12 17:33:49.402883 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Sep 12 17:33:49.402883 ignition[937]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Sep 12 17:33:49.405944 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:33:49.405944 ignition[937]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 12 17:33:49.405944 ignition[937]: INFO : files: files passed Sep 12 17:33:49.405944 ignition[937]: INFO : Ignition finished successfully Sep 12 17:33:49.405808 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 12 17:33:49.413219 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Sep 12 17:33:49.416386 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:33:49.429317 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:33:49.429543 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:33:49.440943 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:33:49.440943 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:33:49.444239 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:33:49.446606 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:33:49.447608 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:33:49.456232 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:33:49.499325 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:33:49.499511 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:33:49.501535 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:33:49.502832 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:33:49.503442 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:33:49.507202 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:33:49.524887 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:33:49.532302 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 17:33:49.552770 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:33:49.554482 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:33:49.555999 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 17:33:49.556574 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 17:33:49.556702 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 17:33:49.558640 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 17:33:49.559372 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 17:33:49.560795 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 17:33:49.561960 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:33:49.563278 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 17:33:49.564535 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 17:33:49.565795 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:33:49.567197 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 17:33:49.568667 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 17:33:49.570260 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 17:33:49.571488 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 17:33:49.571711 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:33:49.573148 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:33:49.574120 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:33:49.575382 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 17:33:49.575774 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:33:49.576967 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 17:33:49.577213 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:33:49.579198 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 17:33:49.579384 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:33:49.581102 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 17:33:49.581334 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 17:33:49.582714 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Sep 12 17:33:49.582903 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Sep 12 17:33:49.594121 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 17:33:49.598378 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 17:33:49.599772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 17:33:49.600067 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:33:49.604290 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 17:33:49.604507 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:33:49.620382 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 17:33:49.620546 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 17:33:49.637495 ignition[989]: INFO : Ignition 2.19.0
Sep 12 17:33:49.640120 ignition[989]: INFO : Stage: umount
Sep 12 17:33:49.640120 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:33:49.640120 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Sep 12 17:33:49.675558 ignition[989]: INFO : umount: umount passed
Sep 12 17:33:49.675558 ignition[989]: INFO : Ignition finished successfully
Sep 12 17:33:49.646364 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 17:33:49.661876 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 17:33:49.662083 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 17:33:49.677155 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 17:33:49.677319 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 17:33:49.678633 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 17:33:49.678729 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 17:33:49.680768 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 17:33:49.680834 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 17:33:49.706806 systemd[1]: Stopped target network.target - Network.
Sep 12 17:33:49.707528 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 17:33:49.707659 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:33:49.709047 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 17:33:49.710151 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 17:33:49.717152 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:33:49.746722 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 17:33:49.747256 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 17:33:49.747842 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 17:33:49.747899 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:33:49.750201 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 17:33:49.750255 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:33:49.750873 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:33:49.750926 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:33:49.752402 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:33:49.752455 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:33:49.753865 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:33:49.755092 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:33:49.756638 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:33:49.756780 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:33:49.758090 systemd-networkd[754]: eth0: DHCPv6 lease lost
Sep 12 17:33:49.758408 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:33:49.758514 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:33:49.762145 systemd-networkd[754]: eth1: DHCPv6 lease lost
Sep 12 17:33:49.764336 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:33:49.764477 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:33:49.766115 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:33:49.766156 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:33:49.772172 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:33:49.773261 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:33:49.773334 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:33:49.781104 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:33:49.782638 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:33:49.783687 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:33:49.791737 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:33:49.791876 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:33:49.795580 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:33:49.795636 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:33:49.796947 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:33:49.797062 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:33:49.800393 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:33:49.800621 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:33:49.803458 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:33:49.803539 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:33:49.804849 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:33:49.804892 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:33:49.806512 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:33:49.806569 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:33:49.808216 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:33:49.808297 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:33:49.810006 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:33:49.810075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:33:49.819258 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:33:49.819960 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:33:49.820200 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:33:49.824164 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 17:33:49.824260 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:33:49.826569 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 17:33:49.826676 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:33:49.832750 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:33:49.832851 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:33:49.834587 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:33:49.834773 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:33:49.836683 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:33:49.836847 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:33:49.839578 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:33:49.846382 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:33:49.861490 systemd[1]: Switching root.
Sep 12 17:33:49.947171 systemd-journald[185]: Journal stopped
Sep 12 17:33:51.196952 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:33:51.199453 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:33:51.199479 kernel: SELinux: policy capability open_perms=1
Sep 12 17:33:51.199503 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:33:51.199514 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:33:51.199525 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:33:51.199550 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:33:51.199567 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:33:51.199578 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:33:51.199589 kernel: audit: type=1403 audit(1757698430.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:33:51.199610 systemd[1]: Successfully loaded SELinux policy in 45.007ms.
Sep 12 17:33:51.199635 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.033ms.
Sep 12 17:33:51.199649 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 17:33:51.199666 systemd[1]: Detected virtualization kvm.
Sep 12 17:33:51.199684 systemd[1]: Detected architecture x86-64.
Sep 12 17:33:51.199696 systemd[1]: Detected first boot.
Sep 12 17:33:51.199708 systemd[1]: Hostname set to .
Sep 12 17:33:51.199720 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:33:51.199732 zram_generator::config[1036]: No configuration found.
Sep 12 17:33:51.199746 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:33:51.199757 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:33:51.199769 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:33:51.199787 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:33:51.199800 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:33:51.199816 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:33:51.199828 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:33:51.199839 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:33:51.199851 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:33:51.199862 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:33:51.199874 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:33:51.199892 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:33:51.199906 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:33:51.199917 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:33:51.199929 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:33:51.199940 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:33:51.199951 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:33:51.199963 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:33:51.200142 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 17:33:51.200159 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:33:51.200181 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:33:51.200193 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:33:51.200204 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:33:51.200216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:33:51.200227 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:33:51.200239 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:33:51.200250 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:33:51.200269 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:33:51.200281 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:33:51.200292 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:33:51.200303 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:33:51.200319 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:33:51.200330 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:33:51.200342 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:33:51.200354 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:33:51.200366 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:33:51.200384 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:33:51.200396 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:51.200408 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:33:51.200419 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:33:51.200431 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:33:51.200442 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:33:51.200454 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:33:51.200465 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:33:51.200484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:33:51.200495 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:33:51.200506 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:33:51.200517 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:33:51.200528 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:33:51.200540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:33:51.200551 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:33:51.200562 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:33:51.200574 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:33:51.200593 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:33:51.200605 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:33:51.200616 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:33:51.200627 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:33:51.200638 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:33:51.200650 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:33:51.200665 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:33:51.200684 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:33:51.200715 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:33:51.200738 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:33:51.200755 systemd[1]: Stopped verity-setup.service.
Sep 12 17:33:51.200774 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 17:33:51.200792 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:33:51.200809 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:33:51.200828 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:33:51.200847 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:33:51.200906 systemd-journald[1105]: Collecting audit messages is disabled.
Sep 12 17:33:51.200934 systemd-journald[1105]: Journal started
Sep 12 17:33:51.201006 systemd-journald[1105]: Runtime Journal (/run/log/journal/4a40b43142984bae8822ceb80081581d) is 4.9M, max 39.3M, 34.4M free.
Sep 12 17:33:50.848397 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:33:50.870383 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:33:50.870936 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:33:51.205023 kernel: fuse: init (API version 7.39)
Sep 12 17:33:51.207002 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:33:51.210404 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:33:51.212626 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:33:51.213701 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:33:51.215468 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:33:51.215665 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:33:51.217737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:33:51.217909 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:33:51.219513 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:33:51.219667 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:33:51.221689 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:33:51.223192 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:33:51.227452 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:33:51.228509 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:33:51.229644 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:33:51.260626 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:33:51.272011 kernel: loop: module loaded
Sep 12 17:33:51.272415 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:33:51.284062 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:33:51.284722 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:33:51.284781 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:33:51.289687 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 17:33:51.312776 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 17:33:51.323443 kernel: ACPI: bus type drm_connector registered
Sep 12 17:33:51.322322 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:33:51.323251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:33:51.326193 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:33:51.343281 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:33:51.344017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:33:51.350122 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:33:51.364330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:33:51.371295 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:33:51.375761 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:33:51.382481 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:33:51.390386 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:33:51.390634 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:33:51.396671 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:33:51.396873 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:33:51.398445 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:33:51.400392 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:33:51.402280 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 17:33:51.420334 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:33:51.434740 systemd-journald[1105]: Time spent on flushing to /var/log/journal/4a40b43142984bae8822ceb80081581d is 50.542ms for 988 entries.
Sep 12 17:33:51.434740 systemd-journald[1105]: System Journal (/var/log/journal/4a40b43142984bae8822ceb80081581d) is 8.0M, max 195.6M, 187.6M free.
Sep 12 17:33:51.507087 systemd-journald[1105]: Received client request to flush runtime journal.
Sep 12 17:33:51.485506 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:33:51.487630 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:33:51.498244 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 17:33:51.515566 kernel: loop0: detected capacity change from 0 to 8
Sep 12 17:33:51.511921 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:33:51.532123 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:33:51.555222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:33:51.571627 kernel: loop1: detected capacity change from 0 to 142488
Sep 12 17:33:51.576242 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:33:51.581454 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 17:33:51.586576 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:33:51.599447 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 17:33:51.624614 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Sep 12 17:33:51.626071 systemd-tmpfiles[1150]: ACLs are not supported, ignoring.
Sep 12 17:33:51.635058 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 17:33:51.660933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:33:51.697135 kernel: loop3: detected capacity change from 0 to 221472
Sep 12 17:33:51.704492 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:33:51.705880 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Sep 12 17:33:51.752480 kernel: loop4: detected capacity change from 0 to 8
Sep 12 17:33:51.766036 kernel: loop5: detected capacity change from 0 to 142488
Sep 12 17:33:51.798089 kernel: loop6: detected capacity change from 0 to 140768
Sep 12 17:33:51.835290 kernel: loop7: detected capacity change from 0 to 221472
Sep 12 17:33:51.863959 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Sep 12 17:33:51.866613 (sd-merge)[1175]: Merged extensions into '/usr'.
Sep 12 17:33:51.882692 systemd[1]: Reloading requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:33:51.882720 systemd[1]: Reloading...
Sep 12 17:33:52.103053 zram_generator::config[1203]: No configuration found.
Sep 12 17:33:52.308908 ldconfig[1144]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:33:52.420345 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 17:33:52.510341 systemd[1]: Reloading finished in 625 ms.
Sep 12 17:33:52.548432 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:33:52.550222 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:33:52.555422 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:33:52.567265 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:33:52.577146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:33:52.585425 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:33:52.591537 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:33:52.591555 systemd[1]: Reloading...
Sep 12 17:33:52.691883 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:33:52.693400 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:33:52.699252 systemd-tmpfiles[1249]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:33:52.699737 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 12 17:33:52.699857 systemd-tmpfiles[1249]: ACLs are not supported, ignoring.
Sep 12 17:33:52.701361 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 17:33:52.701390 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Sep 12 17:33:52.719636 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:33:52.719659 systemd-tmpfiles[1249]: Skipping /boot
Sep 12 17:33:52.763009 zram_generator::config[1279]: No configuration found.
Sep 12 17:33:52.762499 systemd-tmpfiles[1249]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:33:52.762508 systemd-tmpfiles[1249]: Skipping /boot Sep 12 17:33:52.943853 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:33:52.994285 systemd[1]: Reloading finished in 402 ms. Sep 12 17:33:53.014957 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:33:53.022150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:33:53.023620 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:33:53.040756 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:53.046364 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:33:53.050832 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:33:53.058184 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 17:33:53.067180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:33:53.079302 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:33:53.082404 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.082581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:53.088590 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:33:53.099480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:33:53.108593 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Sep 12 17:33:53.111229 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:53.111384 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.112374 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:33:53.124438 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:33:53.132331 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:33:53.134432 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.134667 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:53.134837 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:53.134944 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.137892 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.139269 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:53.145341 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:33:53.147343 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:53.147807 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 17:33:53.152757 systemd-udevd[1333]: Using default interface naming scheme 'v255'. Sep 12 17:33:53.153752 systemd[1]: Finished ensure-sysext.service. Sep 12 17:33:53.169135 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:33:53.172550 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 17:33:53.179378 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:33:53.179608 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:33:53.181011 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:33:53.181160 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:33:53.182532 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:33:53.184950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:33:53.185148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:33:53.186200 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:33:53.188473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:33:53.196027 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:33:53.196494 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:33:53.217153 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:33:53.230380 augenrules[1363]: No rules Sep 12 17:33:53.231561 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:33:53.233547 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:33:53.253114 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Sep 12 17:33:53.255911 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:33:53.264797 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:33:53.406117 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:33:53.407517 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:33:53.453681 systemd-networkd[1361]: lo: Link UP Sep 12 17:33:53.453695 systemd-networkd[1361]: lo: Gained carrier Sep 12 17:33:53.455927 systemd-networkd[1361]: Enumeration completed Sep 12 17:33:53.456168 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:33:53.467387 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:33:53.489625 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Sep 12 17:33:53.501314 systemd-resolved[1327]: Positive Trust Anchors: Sep 12 17:33:53.501332 systemd-resolved[1327]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:33:53.501393 systemd-resolved[1327]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:33:53.512590 systemd-resolved[1327]: Using system hostname 'ci-4081.3.6-8-31c29e3945'. 
Sep 12 17:33:53.519777 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Sep 12 17:33:53.521112 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.521252 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:33:53.529260 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:33:53.539597 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:33:53.544292 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:33:53.546171 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:33:53.546231 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:33:53.546255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 17:33:53.546494 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:33:53.549051 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:33:53.549701 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:33:53.559716 systemd[1]: Reached target network.target - Network. Sep 12 17:33:53.560361 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:33:53.574124 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:33:53.574349 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Sep 12 17:33:53.575268 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:33:53.592770 kernel: ISO 9660 Extensions: RRIP_1991A Sep 12 17:33:53.592548 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:33:53.593087 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:33:53.599422 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Sep 12 17:33:53.604201 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:33:53.638033 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1376) Sep 12 17:33:53.655820 systemd-networkd[1361]: eth0: Configuring with /run/systemd/network/10-fa:7c:1b:84:08:09.network. Sep 12 17:33:53.658075 systemd-networkd[1361]: eth0: Link UP Sep 12 17:33:53.658089 systemd-networkd[1361]: eth0: Gained carrier Sep 12 17:33:53.665673 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:33:53.677334 systemd-networkd[1361]: eth1: Configuring with /run/systemd/network/10-f2:2a:cf:04:f6:e9.network. Sep 12 17:33:53.679264 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:33:53.679618 systemd-networkd[1361]: eth1: Link UP Sep 12 17:33:53.679631 systemd-networkd[1361]: eth1: Gained carrier Sep 12 17:33:53.683847 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:33:53.684064 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. 
Sep 12 17:33:53.714838 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Sep 12 17:33:53.714936 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Sep 12 17:33:53.728231 kernel: ACPI: button: Power Button [PWRF] Sep 12 17:33:53.747275 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Sep 12 17:33:53.801436 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:33:53.814265 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:33:53.835035 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 17:33:53.846122 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:53.860744 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:33:53.905031 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Sep 12 17:33:53.907287 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Sep 12 17:33:53.924074 kernel: Console: switching to colour dummy device 80x25 Sep 12 17:33:53.926056 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Sep 12 17:33:53.926161 kernel: [drm] features: -context_init Sep 12 17:33:53.933590 kernel: [drm] number of scanouts: 1 Sep 12 17:33:53.932559 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:53.932903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:53.936024 kernel: [drm] number of cap sets: 0 Sep 12 17:33:53.969207 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Sep 12 17:33:53.969297 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 12 17:33:53.979645 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Sep 12 17:33:53.979760 kernel: Console: switching to colour frame buffer device 128x48 Sep 12 17:33:53.987026 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Sep 12 17:33:54.028297 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:33:54.028498 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:54.055586 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:33:54.094641 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:33:54.131012 kernel: EDAC MC: Ver: 3.0.0 Sep 12 17:33:54.163684 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 17:33:54.176315 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 17:33:54.193611 lvm[1438]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:33:54.234480 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 17:33:54.236486 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:33:54.236723 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:33:54.237225 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:33:54.238175 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:33:54.238488 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:33:54.238682 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:33:54.238753 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Sep 12 17:33:54.238812 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:33:54.238848 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:33:54.238920 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:33:54.241158 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:33:54.243261 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:33:54.251109 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:33:54.253769 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 17:33:54.256520 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:33:54.258236 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:33:54.259594 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:33:54.262545 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:33:54.262617 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:33:54.271734 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:33:54.277140 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 17:33:54.277862 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 17:33:54.288334 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:33:54.294743 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:33:54.300017 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 12 17:33:54.302639 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:33:54.310043 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:33:54.328222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:33:54.339846 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:33:54.348256 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:33:54.351536 jq[1446]: false Sep 12 17:33:54.357403 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:33:54.358574 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:33:54.359837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:33:54.367380 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:33:54.384025 coreos-metadata[1444]: Sep 12 17:33:54.383 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:33:54.386541 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:33:54.389676 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 17:33:54.396244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:33:54.396507 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Sep 12 17:33:54.413125 coreos-metadata[1444]: Sep 12 17:33:54.406 INFO Fetch successful Sep 12 17:33:54.421502 jq[1457]: true Sep 12 17:33:54.436101 update_engine[1456]: I20250912 17:33:54.435874 1456 main.cc:92] Flatcar Update Engine starting Sep 12 17:33:54.447499 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:33:54.447746 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:33:54.461659 dbus-daemon[1445]: [system] SELinux support is enabled Sep 12 17:33:54.461925 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:33:54.466426 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:33:54.466464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:33:54.468836 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:33:54.468924 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Sep 12 17:33:54.468951 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Sep 12 17:33:54.481152 extend-filesystems[1447]: Found loop4 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found loop5 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found loop6 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found loop7 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda1 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda2 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda3 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found usr Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda4 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda6 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda7 Sep 12 17:33:54.493950 extend-filesystems[1447]: Found vda9 Sep 12 17:33:54.493950 extend-filesystems[1447]: Checking size of /dev/vda9 Sep 12 17:33:54.481466 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:33:54.586145 update_engine[1456]: I20250912 17:33:54.481961 1456 update_check_scheduler.cc:74] Next update check in 8m21s Sep 12 17:33:54.586330 tar[1460]: linux-amd64/helm Sep 12 17:33:54.494417 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:33:54.497911 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:33:54.590305 jq[1468]: true Sep 12 17:33:54.513908 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:33:54.516059 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:33:54.608812 extend-filesystems[1447]: Resized partition /dev/vda9 Sep 12 17:33:54.616037 extend-filesystems[1491]: resize2fs 1.47.1 (20-May-2024) Sep 12 17:33:54.634546 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Sep 12 17:33:54.633572 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Sep 12 17:33:54.634476 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:33:54.656891 systemd-logind[1455]: New seat seat0. Sep 12 17:33:54.688398 systemd-networkd[1361]: eth1: Gained IPv6LL Sep 12 17:33:54.688960 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:33:54.715057 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:33:54.811127 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:33:54.811616 systemd-logind[1455]: Watching system buttons on /dev/input/event1 (Power Button) Sep 12 17:33:54.811642 systemd-logind[1455]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 17:33:54.822244 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:33:54.847385 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 17:33:54.890080 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1373) Sep 12 17:33:54.858377 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:33:54.861964 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:33:54.904017 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:33:54.904017 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 17:33:54.904017 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 12 17:33:54.903968 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Sep 12 17:33:54.927245 bash[1507]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:33:54.927375 extend-filesystems[1447]: Resized filesystem in /dev/vda9 Sep 12 17:33:54.927375 extend-filesystems[1447]: Found vdb Sep 12 17:33:54.905472 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:33:54.913181 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:33:54.978395 systemd[1]: Starting sshkeys.service... Sep 12 17:33:55.050573 locksmithd[1480]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:33:55.056415 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:33:55.073491 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 17:33:55.090594 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 17:33:55.220780 coreos-metadata[1527]: Sep 12 17:33:55.218 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Sep 12 17:33:55.234890 coreos-metadata[1527]: Sep 12 17:33:55.233 INFO Fetch successful Sep 12 17:33:55.269867 unknown[1527]: wrote ssh authorized keys file for user: core Sep 12 17:33:55.353932 update-ssh-keys[1535]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:33:55.355138 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 17:33:55.368458 systemd[1]: Finished sshkeys.service. Sep 12 17:33:55.387334 containerd[1470]: time="2025-09-12T17:33:55.387173773Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 17:33:55.460743 containerd[1470]: time="2025-09-12T17:33:55.459035944Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 17:33:55.468353 containerd[1470]: time="2025-09-12T17:33:55.468273222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.468524002Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.468567576Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.468826471Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.468867557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.468954499Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:55.469113 containerd[1470]: time="2025-09-12T17:33:55.469012558Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.471295 containerd[1470]: time="2025-09-12T17:33:55.471180364Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:55.473703 containerd[1470]: time="2025-09-12T17:33:55.472859441Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.473703 containerd[1470]: time="2025-09-12T17:33:55.472919928Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:55.473703 containerd[1470]: time="2025-09-12T17:33:55.472940254Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.473703 containerd[1470]: time="2025-09-12T17:33:55.473179742Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.473703 containerd[1470]: time="2025-09-12T17:33:55.473581335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 17:33:55.476281 containerd[1470]: time="2025-09-12T17:33:55.475795732Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 17:33:55.476281 containerd[1470]: time="2025-09-12T17:33:55.475835154Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 17:33:55.476281 containerd[1470]: time="2025-09-12T17:33:55.476107664Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Sep 12 17:33:55.476281 containerd[1470]: time="2025-09-12T17:33:55.476209481Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:33:55.481693 sshd_keygen[1483]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:33:55.487792 containerd[1470]: time="2025-09-12T17:33:55.487666387Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 17:33:55.489026 containerd[1470]: time="2025-09-12T17:33:55.488046366Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 17:33:55.489026 containerd[1470]: time="2025-09-12T17:33:55.488092091Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 17:33:55.489026 containerd[1470]: time="2025-09-12T17:33:55.488559268Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 17:33:55.489026 containerd[1470]: time="2025-09-12T17:33:55.488612681Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 17:33:55.489026 containerd[1470]: time="2025-09-12T17:33:55.488885980Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 17:33:55.491340 containerd[1470]: time="2025-09-12T17:33:55.491291630Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 17:33:55.491762 containerd[1470]: time="2025-09-12T17:33:55.491729187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493143965Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493192727Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493219974Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493244459Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493267280Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493292717Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493333069Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493356420Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493376784Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493425206Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493465580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493489858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493511358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494006 containerd[1470]: time="2025-09-12T17:33:55.493535829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493556097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493577691Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493598689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493622775Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493644429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493671776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493701873Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493764158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493788466Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493818292Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493885958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493908653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.494558 containerd[1470]: time="2025-09-12T17:33:55.493927670Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496706685Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496782554Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496803272Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496824550Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496842624Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496866812Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496884459Z" level=info msg="NRI interface is disabled by configuration." Sep 12 17:33:55.497054 containerd[1470]: time="2025-09-12T17:33:55.496904575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 17:33:55.498876 containerd[1470]: time="2025-09-12T17:33:55.497858003Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 17:33:55.498876 containerd[1470]: time="2025-09-12T17:33:55.498023598Z" level=info msg="Connect containerd service" Sep 12 17:33:55.498876 containerd[1470]: time="2025-09-12T17:33:55.498100452Z" level=info msg="using legacy CRI server" Sep 12 17:33:55.498876 containerd[1470]: time="2025-09-12T17:33:55.498115438Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:33:55.498876 containerd[1470]: time="2025-09-12T17:33:55.498319371Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 17:33:55.506022 containerd[1470]: time="2025-09-12T17:33:55.503699605Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Sep 12 17:33:55.506022 containerd[1470]: time="2025-09-12T17:33:55.504570518Z" level=info msg="Start subscribing containerd event" Sep 12 17:33:55.508179 containerd[1470]: time="2025-09-12T17:33:55.504971615Z" level=info msg="Start recovering state" Sep 12 17:33:55.508355 containerd[1470]: time="2025-09-12T17:33:55.508287130Z" level=info msg="Start event monitor" Sep 12 17:33:55.508355 containerd[1470]: time="2025-09-12T17:33:55.508324203Z" level=info msg="Start snapshots syncer" Sep 12 17:33:55.508355 containerd[1470]: time="2025-09-12T17:33:55.508344720Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:33:55.508483 containerd[1470]: time="2025-09-12T17:33:55.508367845Z" level=info msg="Start streaming server" Sep 12 17:33:55.512174 containerd[1470]: time="2025-09-12T17:33:55.512104154Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:33:55.513628 containerd[1470]: time="2025-09-12T17:33:55.513580273Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:33:55.514050 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:33:55.522038 containerd[1470]: time="2025-09-12T17:33:55.521958526Z" level=info msg="containerd successfully booted in 0.136870s" Sep 12 17:33:55.570808 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:33:55.582841 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:33:55.618941 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:33:55.619216 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:33:55.635461 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:33:55.701269 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 12 17:33:55.713214 systemd-networkd[1361]: eth0: Gained IPv6LL Sep 12 17:33:55.713771 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:33:55.715529 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:33:55.725611 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 17:33:55.728663 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:33:56.017066 tar[1460]: linux-amd64/LICENSE Sep 12 17:33:56.017066 tar[1460]: linux-amd64/README.md Sep 12 17:33:56.036245 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:33:56.690686 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:33:56.691800 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:33:56.695378 systemd[1]: Startup finished in 1.226s (kernel) + 6.367s (initrd) + 6.639s (userspace) = 14.234s. Sep 12 17:33:56.702484 (kubelet)[1567]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:33:57.465427 kubelet[1567]: E0912 17:33:57.463232 1567 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:33:57.467758 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:33:57.467934 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:33:57.468905 systemd[1]: kubelet.service: Consumed 1.672s CPU time. Sep 12 17:33:58.026531 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. 
Sep 12 17:33:58.029476 systemd[1]: Started sshd@0-64.227.109.162:22-147.75.109.163:34252.service - OpenSSH per-connection server daemon (147.75.109.163:34252). Sep 12 17:33:58.123049 sshd[1579]: Accepted publickey for core from 147.75.109.163 port 34252 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:58.125076 sshd[1579]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:58.141930 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:33:58.149679 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:33:58.154663 systemd-logind[1455]: New session 1 of user core. Sep 12 17:33:58.190439 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:33:58.203600 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:33:58.212022 (systemd)[1583]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:33:58.381202 systemd[1583]: Queued start job for default target default.target. Sep 12 17:33:58.388912 systemd[1583]: Created slice app.slice - User Application Slice. Sep 12 17:33:58.388968 systemd[1583]: Reached target paths.target - Paths. Sep 12 17:33:58.389011 systemd[1583]: Reached target timers.target - Timers. Sep 12 17:33:58.391331 systemd[1583]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:33:58.411387 systemd[1583]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:33:58.411590 systemd[1583]: Reached target sockets.target - Sockets. Sep 12 17:33:58.411614 systemd[1583]: Reached target basic.target - Basic System. Sep 12 17:33:58.411695 systemd[1583]: Reached target default.target - Main User Target. Sep 12 17:33:58.411744 systemd[1583]: Startup finished in 188ms. Sep 12 17:33:58.412340 systemd[1]: Started user@500.service - User Manager for UID 500. 
Sep 12 17:33:58.424437 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:33:58.500351 systemd[1]: Started sshd@1-64.227.109.162:22-147.75.109.163:34268.service - OpenSSH per-connection server daemon (147.75.109.163:34268). Sep 12 17:33:58.572277 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 34268 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:58.574772 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:58.582358 systemd-logind[1455]: New session 2 of user core. Sep 12 17:33:58.593552 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:33:58.662259 sshd[1594]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:58.676061 systemd[1]: sshd@1-64.227.109.162:22-147.75.109.163:34268.service: Deactivated successfully. Sep 12 17:33:58.679308 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:33:58.682288 systemd-logind[1455]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:33:58.688657 systemd[1]: Started sshd@2-64.227.109.162:22-147.75.109.163:34278.service - OpenSSH per-connection server daemon (147.75.109.163:34278). Sep 12 17:33:58.690904 systemd-logind[1455]: Removed session 2. Sep 12 17:33:58.741657 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 34278 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:58.743618 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:58.750903 systemd-logind[1455]: New session 3 of user core. Sep 12 17:33:58.759432 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:33:58.820866 sshd[1601]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:58.833660 systemd[1]: sshd@2-64.227.109.162:22-147.75.109.163:34278.service: Deactivated successfully. Sep 12 17:33:58.836523 systemd[1]: session-3.scope: Deactivated successfully. 
Sep 12 17:33:58.838391 systemd-logind[1455]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:33:58.854284 systemd[1]: Started sshd@3-64.227.109.162:22-147.75.109.163:34288.service - OpenSSH per-connection server daemon (147.75.109.163:34288). Sep 12 17:33:58.856419 systemd-logind[1455]: Removed session 3. Sep 12 17:33:58.899390 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 34288 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:58.901763 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:58.910823 systemd-logind[1455]: New session 4 of user core. Sep 12 17:33:58.914338 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:33:58.979731 sshd[1608]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:58.993108 systemd[1]: sshd@3-64.227.109.162:22-147.75.109.163:34288.service: Deactivated successfully. Sep 12 17:33:58.995185 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:33:58.998242 systemd-logind[1455]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:33:59.001530 systemd[1]: Started sshd@4-64.227.109.162:22-147.75.109.163:34298.service - OpenSSH per-connection server daemon (147.75.109.163:34298). Sep 12 17:33:59.003267 systemd-logind[1455]: Removed session 4. Sep 12 17:33:59.056286 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 34298 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:59.058227 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:59.064078 systemd-logind[1455]: New session 5 of user core. Sep 12 17:33:59.076372 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:33:59.150889 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:33:59.151256 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:59.167333 sudo[1618]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:59.171396 sshd[1615]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:59.189214 systemd[1]: sshd@4-64.227.109.162:22-147.75.109.163:34298.service: Deactivated successfully. Sep 12 17:33:59.191749 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:33:59.195255 systemd-logind[1455]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:33:59.199521 systemd[1]: Started sshd@5-64.227.109.162:22-147.75.109.163:59434.service - OpenSSH per-connection server daemon (147.75.109.163:59434). Sep 12 17:33:59.202648 systemd-logind[1455]: Removed session 5. Sep 12 17:33:59.325854 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 59434 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:59.329816 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:59.339236 systemd-logind[1455]: New session 6 of user core. Sep 12 17:33:59.343393 systemd[1]: Started session-6.scope - Session 6 of User core. 
Sep 12 17:33:59.424708 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:33:59.425396 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:59.432813 sudo[1627]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:59.443803 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Sep 12 17:33:59.448802 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:33:59.475579 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:59.491237 auditctl[1630]: No rules Sep 12 17:33:59.492776 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:33:59.493942 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Sep 12 17:33:59.512413 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 17:33:59.572032 augenrules[1648]: No rules Sep 12 17:33:59.574057 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 17:33:59.580533 sudo[1626]: pam_unix(sudo:session): session closed for user root Sep 12 17:33:59.590392 sshd[1623]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:59.609071 systemd[1]: sshd@5-64.227.109.162:22-147.75.109.163:59434.service: Deactivated successfully. Sep 12 17:33:59.612936 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:33:59.616427 systemd-logind[1455]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:33:59.626643 systemd[1]: Started sshd@6-64.227.109.162:22-147.75.109.163:59448.service - OpenSSH per-connection server daemon (147.75.109.163:59448). Sep 12 17:33:59.629406 systemd-logind[1455]: Removed session 6. 
Sep 12 17:33:59.681064 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 59448 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:33:59.683914 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:59.692438 systemd-logind[1455]: New session 7 of user core. Sep 12 17:33:59.702405 systemd[1]: Started session-7.scope - Session 7 of User core. Sep 12 17:33:59.768592 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:33:59.769492 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:34:00.608062 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:34:00.610670 (dockerd)[1676]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:34:01.414509 dockerd[1676]: time="2025-09-12T17:34:01.414424894Z" level=info msg="Starting up" Sep 12 17:34:01.664592 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1739383846-merged.mount: Deactivated successfully. Sep 12 17:34:01.705790 dockerd[1676]: time="2025-09-12T17:34:01.705646368Z" level=info msg="Loading containers: start." Sep 12 17:34:01.937184 kernel: Initializing XFRM netlink socket Sep 12 17:34:01.993051 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:34:01.997634 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:34:02.012895 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:34:02.099360 systemd-networkd[1361]: docker0: Link UP Sep 12 17:34:02.099829 systemd-timesyncd[1345]: Network configuration changed, trying to establish connection. Sep 12 17:34:02.132172 dockerd[1676]: time="2025-09-12T17:34:02.131929938Z" level=info msg="Loading containers: done." 
Sep 12 17:34:02.184800 dockerd[1676]: time="2025-09-12T17:34:02.184591482Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:34:02.186402 dockerd[1676]: time="2025-09-12T17:34:02.184806356Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Sep 12 17:34:02.186402 dockerd[1676]: time="2025-09-12T17:34:02.185020824Z" level=info msg="Daemon has completed initialization" Sep 12 17:34:02.187673 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck825598450-merged.mount: Deactivated successfully. Sep 12 17:34:02.288781 dockerd[1676]: time="2025-09-12T17:34:02.286014224Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:34:02.291364 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:34:03.523706 containerd[1470]: time="2025-09-12T17:34:03.523638298Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\"" Sep 12 17:34:04.252957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1007794158.mount: Deactivated successfully. 
Sep 12 17:34:06.063489 containerd[1470]: time="2025-09-12T17:34:06.063406306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.066161 containerd[1470]: time="2025-09-12T17:34:06.065035207Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117124" Sep 12 17:34:06.068148 containerd[1470]: time="2025-09-12T17:34:06.068044897Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.080029 containerd[1470]: time="2025-09-12T17:34:06.078995064Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:06.085187 containerd[1470]: time="2025-09-12T17:34:06.085112742Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 2.561424193s" Sep 12 17:34:06.085481 containerd[1470]: time="2025-09-12T17:34:06.085427082Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\"" Sep 12 17:34:06.086464 containerd[1470]: time="2025-09-12T17:34:06.086390596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\"" Sep 12 17:34:07.610765 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Sep 12 17:34:07.618468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:08.047024 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:08.058540 (kubelet)[1893]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:08.178026 containerd[1470]: time="2025-09-12T17:34:08.177929044Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:08.181289 containerd[1470]: time="2025-09-12T17:34:08.181104594Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716632" Sep 12 17:34:08.182513 containerd[1470]: time="2025-09-12T17:34:08.182420726Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:08.184192 kubelet[1893]: E0912 17:34:08.184141 1893 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:08.193445 containerd[1470]: time="2025-09-12T17:34:08.193035063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:08.193859 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:08.194112 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Sep 12 17:34:08.197952 containerd[1470]: time="2025-09-12T17:34:08.197845647Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 2.11137999s" Sep 12 17:34:08.197952 containerd[1470]: time="2025-09-12T17:34:08.197933679Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\"" Sep 12 17:34:08.198887 containerd[1470]: time="2025-09-12T17:34:08.198807947Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\"" Sep 12 17:34:09.747379 containerd[1470]: time="2025-09-12T17:34:09.747273448Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:09.749513 containerd[1470]: time="2025-09-12T17:34:09.749376464Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787698" Sep 12 17:34:09.753019 containerd[1470]: time="2025-09-12T17:34:09.751363879Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:09.756553 containerd[1470]: time="2025-09-12T17:34:09.756483949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:09.758666 containerd[1470]: time="2025-09-12T17:34:09.758591993Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with 
image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.559686639s" Sep 12 17:34:09.758666 containerd[1470]: time="2025-09-12T17:34:09.758665744Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\"" Sep 12 17:34:09.759534 containerd[1470]: time="2025-09-12T17:34:09.759452324Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\"" Sep 12 17:34:09.761778 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Sep 12 17:34:11.262266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4106443735.mount: Deactivated successfully. Sep 12 17:34:12.171368 containerd[1470]: time="2025-09-12T17:34:12.171289402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:12.173390 containerd[1470]: time="2025-09-12T17:34:12.173307571Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410252" Sep 12 17:34:12.190784 containerd[1470]: time="2025-09-12T17:34:12.190687897Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:12.193876 containerd[1470]: time="2025-09-12T17:34:12.193794216Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:12.196759 containerd[1470]: time="2025-09-12T17:34:12.195885174Z" level=info msg="Pulled image 
\"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.436384697s" Sep 12 17:34:12.196759 containerd[1470]: time="2025-09-12T17:34:12.195961795Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\"" Sep 12 17:34:12.199281 containerd[1470]: time="2025-09-12T17:34:12.199236088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:34:12.769913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4273930819.mount: Deactivated successfully. Sep 12 17:34:12.864230 systemd-resolved[1327]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Sep 12 17:34:14.036545 containerd[1470]: time="2025-09-12T17:34:14.036480475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.039093 containerd[1470]: time="2025-09-12T17:34:14.039020684Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Sep 12 17:34:14.039093 containerd[1470]: time="2025-09-12T17:34:14.039059834Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.044231 containerd[1470]: time="2025-09-12T17:34:14.044165191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.047132 containerd[1470]: 
time="2025-09-12T17:34:14.046480926Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.846714253s" Sep 12 17:34:14.047132 containerd[1470]: time="2025-09-12T17:34:14.046549256Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Sep 12 17:34:14.048109 containerd[1470]: time="2025-09-12T17:34:14.048057533Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:34:14.662011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441172354.mount: Deactivated successfully. Sep 12 17:34:14.668817 containerd[1470]: time="2025-09-12T17:34:14.668705233Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.670365 containerd[1470]: time="2025-09-12T17:34:14.670278412Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Sep 12 17:34:14.671401 containerd[1470]: time="2025-09-12T17:34:14.671295059Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.674108 containerd[1470]: time="2025-09-12T17:34:14.674037878Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:14.675112 containerd[1470]: time="2025-09-12T17:34:14.675064576Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.792723ms" Sep 12 17:34:14.675112 containerd[1470]: time="2025-09-12T17:34:14.675111839Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Sep 12 17:34:14.677340 containerd[1470]: time="2025-09-12T17:34:14.676733700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Sep 12 17:34:15.157024 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520336749.mount: Deactivated successfully. Sep 12 17:34:17.875719 containerd[1470]: time="2025-09-12T17:34:17.875581497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:17.880199 containerd[1470]: time="2025-09-12T17:34:17.878262737Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910709" Sep 12 17:34:17.885148 containerd[1470]: time="2025-09-12T17:34:17.882640666Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:17.894016 containerd[1470]: time="2025-09-12T17:34:17.893314131Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:17.895948 containerd[1470]: time="2025-09-12T17:34:17.895866649Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest 
\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.219088395s" Sep 12 17:34:17.896245 containerd[1470]: time="2025-09-12T17:34:17.896210024Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Sep 12 17:34:18.369891 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:34:18.382999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:18.684428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:18.697452 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:34:18.830313 kubelet[2043]: E0912 17:34:18.830216 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:34:18.834537 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:34:18.834717 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:34:22.544817 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:22.552469 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:22.612295 systemd[1]: Reloading requested from client PID 2064 ('systemctl') (unit session-7.scope)... Sep 12 17:34:22.612322 systemd[1]: Reloading... Sep 12 17:34:22.814024 zram_generator::config[2103]: No configuration found. 
Sep 12 17:34:22.987589 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:23.081900 systemd[1]: Reloading finished in 468 ms. Sep 12 17:34:23.170309 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 12 17:34:23.170618 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 12 17:34:23.171177 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:23.178645 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:23.405209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:23.420032 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:34:23.490604 kubelet[2157]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:23.493022 kubelet[2157]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:34:23.493022 kubelet[2157]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:34:23.493022 kubelet[2157]: I0912 17:34:23.491346 2157 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:34:23.724137 kubelet[2157]: I0912 17:34:23.723329 2157 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:34:23.724137 kubelet[2157]: I0912 17:34:23.723385 2157 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:34:23.724137 kubelet[2157]: I0912 17:34:23.723785 2157 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:34:23.754826 kubelet[2157]: I0912 17:34:23.754340 2157 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:34:23.754826 kubelet[2157]: E0912 17:34:23.754754 2157 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://64.227.109.162:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:23.770659 kubelet[2157]: E0912 17:34:23.770586 2157 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:34:23.770659 kubelet[2157]: I0912 17:34:23.770639 2157 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:34:23.779917 kubelet[2157]: I0912 17:34:23.779766 2157 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:34:23.782187 kubelet[2157]: I0912 17:34:23.781311 2157 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:34:23.782187 kubelet[2157]: I0912 17:34:23.781617 2157 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:34:23.782187 kubelet[2157]: I0912 17:34:23.781678 2157 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-8-31c29e3945","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:34:23.782187 kubelet[2157]: I0912 17:34:23.782027 2157 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:34:23.782602 kubelet[2157]: I0912 17:34:23.782050 2157 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:34:23.782602 kubelet[2157]: I0912 17:34:23.782361 2157 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:23.787720 kubelet[2157]: I0912 17:34:23.785734 2157 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:34:23.787720 kubelet[2157]: I0912 17:34:23.785788 2157 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:34:23.787720 kubelet[2157]: I0912 17:34:23.785839 2157 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:34:23.787720 kubelet[2157]: I0912 17:34:23.785891 2157 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:34:23.791943 kubelet[2157]: W0912 17:34:23.791034 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.109.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:23.791943 kubelet[2157]: E0912 17:34:23.791558 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.109.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:23.791943 kubelet[2157]: W0912 17:34:23.791835 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.109.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-8-31c29e3945&limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: 
connect: connection refused Sep 12 17:34:23.791943 kubelet[2157]: E0912 17:34:23.791905 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.109.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-8-31c29e3945&limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:23.792886 kubelet[2157]: I0912 17:34:23.792729 2157 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:34:23.799035 kubelet[2157]: I0912 17:34:23.797105 2157 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:34:23.799035 kubelet[2157]: W0912 17:34:23.797224 2157 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:34:23.799035 kubelet[2157]: I0912 17:34:23.798825 2157 server.go:1274] "Started kubelet" Sep 12 17:34:23.804004 kubelet[2157]: I0912 17:34:23.803280 2157 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:34:23.804629 kubelet[2157]: I0912 17:34:23.804568 2157 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:34:23.805355 kubelet[2157]: I0912 17:34:23.805317 2157 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:34:23.805487 kubelet[2157]: I0912 17:34:23.805331 2157 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:34:23.807621 kubelet[2157]: E0912 17:34:23.805833 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.109.162:6443/api/v1/namespaces/default/events\": dial tcp 64.227.109.162:6443: connect: connection refused" 
event="&Event{ObjectMeta:{ci-4081.3.6-8-31c29e3945.18649970acf9eeff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-8-31c29e3945,UID:ci-4081.3.6-8-31c29e3945,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-8-31c29e3945,},FirstTimestamp:2025-09-12 17:34:23.798783743 +0000 UTC m=+0.372009976,LastTimestamp:2025-09-12 17:34:23.798783743 +0000 UTC m=+0.372009976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-8-31c29e3945,}" Sep 12 17:34:23.810291 kubelet[2157]: I0912 17:34:23.810256 2157 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:34:23.818654 kubelet[2157]: I0912 17:34:23.818606 2157 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:34:23.820900 kubelet[2157]: I0912 17:34:23.820866 2157 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:34:23.821631 kubelet[2157]: E0912 17:34:23.821592 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-8-31c29e3945\" not found" Sep 12 17:34:23.826392 kubelet[2157]: I0912 17:34:23.826347 2157 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:34:23.826574 kubelet[2157]: I0912 17:34:23.826528 2157 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:34:23.827153 kubelet[2157]: E0912 17:34:23.827103 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://64.227.109.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-8-31c29e3945?timeout=10s\": dial tcp 64.227.109.162:6443: connect: connection refused" interval="200ms" Sep 12 17:34:23.827320 kubelet[2157]: E0912 17:34:23.827293 2157 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:34:23.827670 kubelet[2157]: I0912 17:34:23.827649 2157 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:34:23.828064 kubelet[2157]: I0912 17:34:23.828045 2157 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:34:23.829698 kubelet[2157]: I0912 17:34:23.829658 2157 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:34:23.837109 kubelet[2157]: W0912 17:34:23.835308 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.109.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:23.839116 kubelet[2157]: E0912 17:34:23.837579 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.109.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:23.848965 kubelet[2157]: I0912 17:34:23.848694 2157 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:34:23.851772 kubelet[2157]: I0912 17:34:23.851133 2157 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:34:23.851772 kubelet[2157]: I0912 17:34:23.851172 2157 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:34:23.851772 kubelet[2157]: I0912 17:34:23.851204 2157 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:34:23.851772 kubelet[2157]: E0912 17:34:23.851281 2157 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:34:23.874654 kubelet[2157]: W0912 17:34:23.874576 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.109.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:23.874949 kubelet[2157]: E0912 17:34:23.874918 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.109.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:23.878207 kubelet[2157]: I0912 17:34:23.878171 2157 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:34:23.878897 kubelet[2157]: I0912 17:34:23.878534 2157 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:34:23.878897 kubelet[2157]: I0912 17:34:23.878568 2157 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:23.882422 kubelet[2157]: I0912 17:34:23.882278 2157 policy_none.go:49] "None policy: Start" Sep 12 17:34:23.883890 kubelet[2157]: I0912 17:34:23.883857 2157 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:34:23.884599 kubelet[2157]: I0912 17:34:23.884158 2157 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:34:23.898334 systemd[1]: Created slice kubepods.slice - 
libcontainer container kubepods.slice. Sep 12 17:34:23.914698 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:34:23.922045 kubelet[2157]: E0912 17:34:23.921903 2157 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.6-8-31c29e3945\" not found" Sep 12 17:34:23.923961 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:34:23.936071 kubelet[2157]: I0912 17:34:23.934649 2157 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:34:23.936071 kubelet[2157]: I0912 17:34:23.935027 2157 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:34:23.936071 kubelet[2157]: I0912 17:34:23.935047 2157 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:34:23.939019 kubelet[2157]: I0912 17:34:23.937413 2157 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:34:23.939581 kubelet[2157]: E0912 17:34:23.939298 2157 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.6-8-31c29e3945\" not found" Sep 12 17:34:23.970188 systemd[1]: Created slice kubepods-burstable-pod41fcb295d0c29d940d1f6175f4e5bab4.slice - libcontainer container kubepods-burstable-pod41fcb295d0c29d940d1f6175f4e5bab4.slice. Sep 12 17:34:24.002837 systemd[1]: Created slice kubepods-burstable-pod8956fd7ee68c603b537ce69d5f6795f6.slice - libcontainer container kubepods-burstable-pod8956fd7ee68c603b537ce69d5f6795f6.slice. Sep 12 17:34:24.015439 systemd[1]: Created slice kubepods-burstable-pod3cb63db9b98a2212fb1227ed58ecea4b.slice - libcontainer container kubepods-burstable-pod3cb63db9b98a2212fb1227ed58ecea4b.slice. 
Sep 12 17:34:24.028449 kubelet[2157]: E0912 17:34:24.028256 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.109.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-8-31c29e3945?timeout=10s\": dial tcp 64.227.109.162:6443: connect: connection refused" interval="400ms" Sep 12 17:34:24.037675 kubelet[2157]: I0912 17:34:24.037587 2157 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.038326 kubelet[2157]: E0912 17:34:24.038278 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.109.162:6443/api/v1/nodes\": dial tcp 64.227.109.162:6443: connect: connection refused" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130259 kubelet[2157]: I0912 17:34:24.130014 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130259 kubelet[2157]: I0912 17:34:24.130067 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130259 kubelet[2157]: I0912 17:34:24.130090 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " 
pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130259 kubelet[2157]: I0912 17:34:24.130108 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130259 kubelet[2157]: I0912 17:34:24.130129 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130599 kubelet[2157]: I0912 17:34:24.130148 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130599 kubelet[2157]: I0912 17:34:24.130166 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130599 kubelet[2157]: I0912 17:34:24.130183 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.130599 kubelet[2157]: I0912 17:34:24.130201 2157 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cb63db9b98a2212fb1227ed58ecea4b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-8-31c29e3945\" (UID: \"3cb63db9b98a2212fb1227ed58ecea4b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.239938 kubelet[2157]: I0912 17:34:24.239885 2157 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.240353 kubelet[2157]: E0912 17:34:24.240307 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.109.162:6443/api/v1/nodes\": dial tcp 64.227.109.162:6443: connect: connection refused" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.300780 kubelet[2157]: E0912 17:34:24.300603 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:24.301690 containerd[1470]: time="2025-09-12T17:34:24.301616929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-8-31c29e3945,Uid:41fcb295d0c29d940d1f6175f4e5bab4,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:24.304426 systemd-resolved[1327]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
Sep 12 17:34:24.313314 kubelet[2157]: E0912 17:34:24.313245 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:24.314333 containerd[1470]: time="2025-09-12T17:34:24.313904850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-8-31c29e3945,Uid:8956fd7ee68c603b537ce69d5f6795f6,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:24.320473 kubelet[2157]: E0912 17:34:24.320428 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:24.321467 containerd[1470]: time="2025-09-12T17:34:24.321408249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-8-31c29e3945,Uid:3cb63db9b98a2212fb1227ed58ecea4b,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:24.429185 kubelet[2157]: E0912 17:34:24.428912 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.109.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-8-31c29e3945?timeout=10s\": dial tcp 64.227.109.162:6443: connect: connection refused" interval="800ms" Sep 12 17:34:24.587120 kubelet[2157]: E0912 17:34:24.586827 2157 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://64.227.109.162:6443/api/v1/namespaces/default/events\": dial tcp 64.227.109.162:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.6-8-31c29e3945.18649970acf9eeff default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.6-8-31c29e3945,UID:ci-4081.3.6-8-31c29e3945,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.6-8-31c29e3945,},FirstTimestamp:2025-09-12 17:34:23.798783743 +0000 UTC m=+0.372009976,LastTimestamp:2025-09-12 17:34:23.798783743 +0000 UTC m=+0.372009976,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.6-8-31c29e3945,}" Sep 12 17:34:24.642424 kubelet[2157]: I0912 17:34:24.642378 2157 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.642789 kubelet[2157]: E0912 17:34:24.642758 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.109.162:6443/api/v1/nodes\": dial tcp 64.227.109.162:6443: connect: connection refused" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:24.808478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594396725.mount: Deactivated successfully. Sep 12 17:34:24.812482 kubelet[2157]: W0912 17:34:24.812396 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://64.227.109.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:24.812482 kubelet[2157]: E0912 17:34:24.812449 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://64.227.109.162:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:24.814219 containerd[1470]: time="2025-09-12T17:34:24.814155092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:24.815873 containerd[1470]: 
time="2025-09-12T17:34:24.815773731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Sep 12 17:34:24.819066 containerd[1470]: time="2025-09-12T17:34:24.818776121Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:24.820837 containerd[1470]: time="2025-09-12T17:34:24.820754642Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:24.823812 containerd[1470]: time="2025-09-12T17:34:24.823168481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:24.823812 containerd[1470]: time="2025-09-12T17:34:24.823612453Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 17:34:24.827867 containerd[1470]: time="2025-09-12T17:34:24.827810525Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:24.830241 containerd[1470]: time="2025-09-12T17:34:24.830181618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 528.464315ms" Sep 12 17:34:24.833776 containerd[1470]: time="2025-09-12T17:34:24.833251280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:34:24.835942 containerd[1470]: time="2025-09-12T17:34:24.835865514Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 514.292038ms" Sep 12 17:34:24.837027 containerd[1470]: time="2025-09-12T17:34:24.836770047Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 522.747503ms" Sep 12 17:34:24.879089 kubelet[2157]: W0912 17:34:24.876574 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://64.227.109.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-8-31c29e3945&limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:24.879089 kubelet[2157]: E0912 17:34:24.876724 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://64.227.109.162:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.6-8-31c29e3945&limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:25.027742 containerd[1470]: time="2025-09-12T17:34:25.027409748Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:25.027742 containerd[1470]: time="2025-09-12T17:34:25.027502164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:25.027742 containerd[1470]: time="2025-09-12T17:34:25.027529834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.027742 containerd[1470]: time="2025-09-12T17:34:25.027665973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.035847 kubelet[2157]: W0912 17:34:25.035688 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://64.227.109.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:25.035847 kubelet[2157]: E0912 17:34:25.035759 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://64.227.109.162:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:25.039555 containerd[1470]: time="2025-09-12T17:34:25.039154178Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:25.039555 containerd[1470]: time="2025-09-12T17:34:25.039246830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:25.039555 containerd[1470]: time="2025-09-12T17:34:25.039304438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.039555 containerd[1470]: time="2025-09-12T17:34:25.039442323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.058284 containerd[1470]: time="2025-09-12T17:34:25.056023985Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:25.058284 containerd[1470]: time="2025-09-12T17:34:25.056123302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:25.058284 containerd[1470]: time="2025-09-12T17:34:25.056159575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.063003 containerd[1470]: time="2025-09-12T17:34:25.062558864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:25.077825 systemd[1]: Started cri-containerd-0f5aad993ad8db3f4bcdd0a330c946190c7d7ad2a8c2d033eb891eaa55fdfbd4.scope - libcontainer container 0f5aad993ad8db3f4bcdd0a330c946190c7d7ad2a8c2d033eb891eaa55fdfbd4. Sep 12 17:34:25.088288 systemd[1]: Started cri-containerd-696508fcc2995819e7bb008c800e3cf7179c32ee7758177e90a2d59d9430ee8d.scope - libcontainer container 696508fcc2995819e7bb008c800e3cf7179c32ee7758177e90a2d59d9430ee8d. Sep 12 17:34:25.126238 systemd[1]: Started cri-containerd-8961950a4abab7006386cc736f41d3cecfd180e4abb7b5b5b5ae3201aabee024.scope - libcontainer container 8961950a4abab7006386cc736f41d3cecfd180e4abb7b5b5b5ae3201aabee024. 
Sep 12 17:34:25.184864 containerd[1470]: time="2025-09-12T17:34:25.183668110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.6-8-31c29e3945,Uid:41fcb295d0c29d940d1f6175f4e5bab4,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f5aad993ad8db3f4bcdd0a330c946190c7d7ad2a8c2d033eb891eaa55fdfbd4\"" Sep 12 17:34:25.189904 kubelet[2157]: E0912 17:34:25.189794 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:25.196814 containerd[1470]: time="2025-09-12T17:34:25.196758831Z" level=info msg="CreateContainer within sandbox \"0f5aad993ad8db3f4bcdd0a330c946190c7d7ad2a8c2d033eb891eaa55fdfbd4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:34:25.231619 kubelet[2157]: E0912 17:34:25.231542 2157 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://64.227.109.162:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.6-8-31c29e3945?timeout=10s\": dial tcp 64.227.109.162:6443: connect: connection refused" interval="1.6s" Sep 12 17:34:25.242014 containerd[1470]: time="2025-09-12T17:34:25.239326189Z" level=info msg="CreateContainer within sandbox \"0f5aad993ad8db3f4bcdd0a330c946190c7d7ad2a8c2d033eb891eaa55fdfbd4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a3518fbf996fdf3211759ba95c68b859e995d9ad69a5ae6a0427b2e430ba0ae1\"" Sep 12 17:34:25.244484 containerd[1470]: time="2025-09-12T17:34:25.244308015Z" level=info msg="StartContainer for \"a3518fbf996fdf3211759ba95c68b859e995d9ad69a5ae6a0427b2e430ba0ae1\"" Sep 12 17:34:25.261870 containerd[1470]: time="2025-09-12T17:34:25.261712288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.6-8-31c29e3945,Uid:3cb63db9b98a2212fb1227ed58ecea4b,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"696508fcc2995819e7bb008c800e3cf7179c32ee7758177e90a2d59d9430ee8d\"" Sep 12 17:34:25.262854 kubelet[2157]: E0912 17:34:25.262820 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:25.266112 containerd[1470]: time="2025-09-12T17:34:25.266051855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.6-8-31c29e3945,Uid:8956fd7ee68c603b537ce69d5f6795f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8961950a4abab7006386cc736f41d3cecfd180e4abb7b5b5b5ae3201aabee024\"" Sep 12 17:34:25.266655 containerd[1470]: time="2025-09-12T17:34:25.266611306Z" level=info msg="CreateContainer within sandbox \"696508fcc2995819e7bb008c800e3cf7179c32ee7758177e90a2d59d9430ee8d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:34:25.267570 kubelet[2157]: E0912 17:34:25.267540 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:25.282480 containerd[1470]: time="2025-09-12T17:34:25.282417910Z" level=info msg="CreateContainer within sandbox \"8961950a4abab7006386cc736f41d3cecfd180e4abb7b5b5b5ae3201aabee024\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:34:25.289972 containerd[1470]: time="2025-09-12T17:34:25.289911227Z" level=info msg="CreateContainer within sandbox \"696508fcc2995819e7bb008c800e3cf7179c32ee7758177e90a2d59d9430ee8d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5a768633ef22a0d9504acbeadba3f2500b14f9e7bb67cc49995fb3b78d55a18d\"" Sep 12 17:34:25.291023 containerd[1470]: time="2025-09-12T17:34:25.290555882Z" level=info msg="StartContainer for \"5a768633ef22a0d9504acbeadba3f2500b14f9e7bb67cc49995fb3b78d55a18d\"" Sep 12 17:34:25.302945 
containerd[1470]: time="2025-09-12T17:34:25.302879704Z" level=info msg="CreateContainer within sandbox \"8961950a4abab7006386cc736f41d3cecfd180e4abb7b5b5b5ae3201aabee024\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7309e041384e5f58202b3ed4775550e810281392dbebb320dc69fe8ce29d32b6\"" Sep 12 17:34:25.304865 containerd[1470]: time="2025-09-12T17:34:25.304806439Z" level=info msg="StartContainer for \"7309e041384e5f58202b3ed4775550e810281392dbebb320dc69fe8ce29d32b6\"" Sep 12 17:34:25.319328 systemd[1]: Started cri-containerd-a3518fbf996fdf3211759ba95c68b859e995d9ad69a5ae6a0427b2e430ba0ae1.scope - libcontainer container a3518fbf996fdf3211759ba95c68b859e995d9ad69a5ae6a0427b2e430ba0ae1. Sep 12 17:34:25.332024 kubelet[2157]: W0912 17:34:25.330668 2157 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://64.227.109.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 64.227.109.162:6443: connect: connection refused Sep 12 17:34:25.332024 kubelet[2157]: E0912 17:34:25.330770 2157 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://64.227.109.162:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 64.227.109.162:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:34:25.364798 systemd[1]: Started cri-containerd-5a768633ef22a0d9504acbeadba3f2500b14f9e7bb67cc49995fb3b78d55a18d.scope - libcontainer container 5a768633ef22a0d9504acbeadba3f2500b14f9e7bb67cc49995fb3b78d55a18d. Sep 12 17:34:25.390030 systemd[1]: Started cri-containerd-7309e041384e5f58202b3ed4775550e810281392dbebb320dc69fe8ce29d32b6.scope - libcontainer container 7309e041384e5f58202b3ed4775550e810281392dbebb320dc69fe8ce29d32b6. 
Sep 12 17:34:25.436767 containerd[1470]: time="2025-09-12T17:34:25.435553885Z" level=info msg="StartContainer for \"a3518fbf996fdf3211759ba95c68b859e995d9ad69a5ae6a0427b2e430ba0ae1\" returns successfully" Sep 12 17:34:25.445852 kubelet[2157]: I0912 17:34:25.445813 2157 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:25.447613 kubelet[2157]: E0912 17:34:25.447565 2157 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://64.227.109.162:6443/api/v1/nodes\": dial tcp 64.227.109.162:6443: connect: connection refused" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:25.494853 containerd[1470]: time="2025-09-12T17:34:25.494326106Z" level=info msg="StartContainer for \"5a768633ef22a0d9504acbeadba3f2500b14f9e7bb67cc49995fb3b78d55a18d\" returns successfully" Sep 12 17:34:25.523712 containerd[1470]: time="2025-09-12T17:34:25.523645661Z" level=info msg="StartContainer for \"7309e041384e5f58202b3ed4775550e810281392dbebb320dc69fe8ce29d32b6\" returns successfully" Sep 12 17:34:25.910754 kubelet[2157]: E0912 17:34:25.910584 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:25.916396 kubelet[2157]: E0912 17:34:25.915238 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:25.916947 kubelet[2157]: E0912 17:34:25.916864 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:26.922665 kubelet[2157]: E0912 17:34:26.922037 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:26.922665 kubelet[2157]: E0912 17:34:26.922581 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:27.049155 kubelet[2157]: I0912 17:34:27.049100 2157 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:27.529519 kubelet[2157]: E0912 17:34:27.528998 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:28.091114 kubelet[2157]: E0912 17:34:28.089897 2157 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:28.118543 kubelet[2157]: E0912 17:34:28.118483 2157 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.6-8-31c29e3945\" not found" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:28.276281 kubelet[2157]: I0912 17:34:28.275971 2157 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:28.276281 kubelet[2157]: E0912 17:34:28.276037 2157 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-4081.3.6-8-31c29e3945\": node \"ci-4081.3.6-8-31c29e3945\" not found" Sep 12 17:34:28.793254 kubelet[2157]: I0912 17:34:28.793089 2157 apiserver.go:52] "Watching apiserver" Sep 12 17:34:28.828665 kubelet[2157]: I0912 17:34:28.828587 2157 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:34:30.407165 systemd[1]: Reloading requested from client PID 2431 ('systemctl') (unit session-7.scope)... Sep 12 17:34:30.407679 systemd[1]: Reloading... 
Sep 12 17:34:30.552093 zram_generator::config[2466]: No configuration found. Sep 12 17:34:30.789041 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 17:34:30.922909 systemd[1]: Reloading finished in 514 ms. Sep 12 17:34:30.980123 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:30.993944 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:34:30.994497 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:31.002573 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:34:31.257497 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:34:31.273590 (kubelet)[2521]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:34:31.383047 kubelet[2521]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:34:31.383047 kubelet[2521]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 12 17:34:31.383047 kubelet[2521]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 12 17:34:31.383047 kubelet[2521]: I0912 17:34:31.382572 2521 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:34:31.397009 kubelet[2521]: I0912 17:34:31.396884 2521 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 12 17:34:31.397257 kubelet[2521]: I0912 17:34:31.396966 2521 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:34:31.398557 kubelet[2521]: I0912 17:34:31.398072 2521 server.go:934] "Client rotation is on, will bootstrap in background" Sep 12 17:34:31.403013 kubelet[2521]: I0912 17:34:31.400908 2521 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:34:31.406764 kubelet[2521]: I0912 17:34:31.406585 2521 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:34:31.421639 kubelet[2521]: E0912 17:34:31.421085 2521 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 17:34:31.421973 kubelet[2521]: I0912 17:34:31.421836 2521 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 17:34:31.428743 kubelet[2521]: I0912 17:34:31.428608 2521 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:34:31.429049 kubelet[2521]: I0912 17:34:31.429028 2521 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 12 17:34:31.431023 kubelet[2521]: I0912 17:34:31.429312 2521 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:34:31.431023 kubelet[2521]: I0912 17:34:31.429361 2521 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.6-8-31c29e3945","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","Topolog
yManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:34:31.431023 kubelet[2521]: I0912 17:34:31.429762 2521 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:34:31.431023 kubelet[2521]: I0912 17:34:31.429776 2521 container_manager_linux.go:300] "Creating device plugin manager" Sep 12 17:34:31.431448 kubelet[2521]: I0912 17:34:31.429815 2521 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:31.431448 kubelet[2521]: I0912 17:34:31.430064 2521 kubelet.go:408] "Attempting to sync node with API server" Sep 12 17:34:31.431448 kubelet[2521]: I0912 17:34:31.430086 2521 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:34:31.431448 kubelet[2521]: I0912 17:34:31.430122 2521 kubelet.go:314] "Adding apiserver pod source" Sep 12 17:34:31.431448 kubelet[2521]: I0912 17:34:31.430133 2521 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:34:31.447351 kubelet[2521]: I0912 17:34:31.447310 2521 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 17:34:31.455127 kubelet[2521]: I0912 17:34:31.454468 2521 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:34:31.455965 kubelet[2521]: I0912 17:34:31.455802 2521 server.go:1274] "Started kubelet" Sep 12 17:34:31.462756 kubelet[2521]: I0912 17:34:31.460889 2521 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:34:31.470859 kubelet[2521]: I0912 17:34:31.469831 2521 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:34:31.474205 kubelet[2521]: I0912 17:34:31.473808 2521 server.go:449] "Adding debug handlers to kubelet server" Sep 12 17:34:31.477053 kubelet[2521]: I0912 17:34:31.476944 2521 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:34:31.478314 kubelet[2521]: I0912 17:34:31.478288 2521 
volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 17:34:31.479462 kubelet[2521]: I0912 17:34:31.479419 2521 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 17:34:31.482885 kubelet[2521]: I0912 17:34:31.482854 2521 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:34:31.483159 kubelet[2521]: I0912 17:34:31.480388 2521 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:34:31.485395 kubelet[2521]: I0912 17:34:31.480101 2521 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:34:31.486971 kubelet[2521]: I0912 17:34:31.486929 2521 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:34:31.489607 kubelet[2521]: I0912 17:34:31.488954 2521 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:34:31.496168 kubelet[2521]: E0912 17:34:31.496129 2521 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:34:31.501055 kubelet[2521]: I0912 17:34:31.500653 2521 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:34:31.525318 kubelet[2521]: I0912 17:34:31.525175 2521 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:34:31.533491 kubelet[2521]: I0912 17:34:31.533410 2521 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:34:31.533491 kubelet[2521]: I0912 17:34:31.533444 2521 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 17:34:31.533491 kubelet[2521]: I0912 17:34:31.533464 2521 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 17:34:31.533995 kubelet[2521]: E0912 17:34:31.533764 2521 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:34:31.634045 kubelet[2521]: E0912 17:34:31.633901 2521 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:34:31.649417 kubelet[2521]: I0912 17:34:31.649285 2521 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 17:34:31.649417 kubelet[2521]: I0912 17:34:31.649307 2521 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 17:34:31.649417 kubelet[2521]: I0912 17:34:31.649334 2521 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:34:31.650166 kubelet[2521]: I0912 17:34:31.649806 2521 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:34:31.650166 kubelet[2521]: I0912 17:34:31.649827 2521 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:34:31.650166 kubelet[2521]: I0912 17:34:31.649880 2521 policy_none.go:49] "None policy: Start" Sep 12 17:34:31.652094 kubelet[2521]: I0912 17:34:31.651602 2521 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 12 17:34:31.652094 kubelet[2521]: I0912 17:34:31.651638 2521 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:34:31.652094 kubelet[2521]: I0912 17:34:31.651868 2521 state_mem.go:75] "Updated machine memory state" Sep 12 17:34:31.660636 kubelet[2521]: I0912 17:34:31.660573 2521 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:34:31.664990 kubelet[2521]: I0912 17:34:31.664946 
2521 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:34:31.668396 kubelet[2521]: I0912 17:34:31.665415 2521 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:34:31.668396 kubelet[2521]: I0912 17:34:31.665865 2521 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:34:31.779755 kubelet[2521]: I0912 17:34:31.779580 2521 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.798787 kubelet[2521]: I0912 17:34:31.798692 2521 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.799546 kubelet[2521]: I0912 17:34:31.799527 2521 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.852605 kubelet[2521]: W0912 17:34:31.852380 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:34:31.857019 kubelet[2521]: W0912 17:34:31.856952 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:34:31.857717 kubelet[2521]: W0912 17:34:31.857416 2521 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 17:34:31.886606 kubelet[2521]: I0912 17:34:31.886544 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3cb63db9b98a2212fb1227ed58ecea4b-kubeconfig\") pod \"kube-scheduler-ci-4081.3.6-8-31c29e3945\" (UID: \"3cb63db9b98a2212fb1227ed58ecea4b\") " pod="kube-system/kube-scheduler-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.887708 kubelet[2521]: I0912 17:34:31.886858 2521 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-ca-certs\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.887708 kubelet[2521]: I0912 17:34:31.886888 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-k8s-certs\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.887708 kubelet[2521]: I0912 17:34:31.886909 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/41fcb295d0c29d940d1f6175f4e5bab4-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.6-8-31c29e3945\" (UID: \"41fcb295d0c29d940d1f6175f4e5bab4\") " pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.887708 kubelet[2521]: I0912 17:34:31.886931 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.887708 kubelet[2521]: I0912 17:34:31.887045 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: 
\"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.889414 kubelet[2521]: I0912 17:34:31.887083 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-ca-certs\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.889414 kubelet[2521]: I0912 17:34:31.887114 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:31.889414 kubelet[2521]: I0912 17:34:31.887139 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8956fd7ee68c603b537ce69d5f6795f6-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.6-8-31c29e3945\" (UID: \"8956fd7ee68c603b537ce69d5f6795f6\") " pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" Sep 12 17:34:32.923758 systemd-timesyncd[1345]: Contacted time server 65.182.224.60:123 (2.flatcar.pool.ntp.org). Sep 12 17:34:32.923830 systemd-timesyncd[1345]: Initial clock synchronization to Fri 2025-09-12 17:34:32.923419 UTC. Sep 12 17:34:32.925449 systemd-resolved[1327]: Clock change detected. Flushing caches. 
Sep 12 17:34:32.943432 kubelet[2521]: E0912 17:34:32.943341 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:32.948135 kubelet[2521]: E0912 17:34:32.947981 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:32.949772 kubelet[2521]: E0912 17:34:32.947984 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:33.221902 kubelet[2521]: I0912 17:34:33.221646 2521 apiserver.go:52] "Watching apiserver" Sep 12 17:34:33.273394 kubelet[2521]: I0912 17:34:33.273283 2521 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 12 17:34:33.383552 kubelet[2521]: E0912 17:34:33.382237 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:33.383552 kubelet[2521]: E0912 17:34:33.382502 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:33.383552 kubelet[2521]: E0912 17:34:33.382844 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:33.434884 kubelet[2521]: I0912 17:34:33.434777 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.6-8-31c29e3945" podStartSLOduration=2.434722519 
podStartE2EDuration="2.434722519s" podCreationTimestamp="2025-09-12 17:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:33.434577736 +0000 UTC m=+1.356786778" watchObservedRunningTime="2025-09-12 17:34:33.434722519 +0000 UTC m=+1.356931535" Sep 12 17:34:33.466054 kubelet[2521]: I0912 17:34:33.465289 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.6-8-31c29e3945" podStartSLOduration=2.465267642 podStartE2EDuration="2.465267642s" podCreationTimestamp="2025-09-12 17:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:33.453964365 +0000 UTC m=+1.376173409" watchObservedRunningTime="2025-09-12 17:34:33.465267642 +0000 UTC m=+1.387476679" Sep 12 17:34:34.384487 kubelet[2521]: E0912 17:34:34.384436 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:35.713731 kubelet[2521]: I0912 17:34:35.713530 2521 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:34:35.717180 containerd[1470]: time="2025-09-12T17:34:35.717099742Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:34:35.719571 kubelet[2521]: I0912 17:34:35.718263 2521 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:34:36.492324 kubelet[2521]: I0912 17:34:36.492218 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.6-8-31c29e3945" podStartSLOduration=5.49219308 podStartE2EDuration="5.49219308s" podCreationTimestamp="2025-09-12 17:34:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:33.465647276 +0000 UTC m=+1.387856323" watchObservedRunningTime="2025-09-12 17:34:36.49219308 +0000 UTC m=+4.414402113" Sep 12 17:34:36.502436 kubelet[2521]: I0912 17:34:36.502371 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a78c5684-539d-4143-81c6-5f519b79890f-kube-proxy\") pod \"kube-proxy-75l7s\" (UID: \"a78c5684-539d-4143-81c6-5f519b79890f\") " pod="kube-system/kube-proxy-75l7s" Sep 12 17:34:36.502436 kubelet[2521]: I0912 17:34:36.502423 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a78c5684-539d-4143-81c6-5f519b79890f-xtables-lock\") pod \"kube-proxy-75l7s\" (UID: \"a78c5684-539d-4143-81c6-5f519b79890f\") " pod="kube-system/kube-proxy-75l7s" Sep 12 17:34:36.502436 kubelet[2521]: I0912 17:34:36.502445 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a78c5684-539d-4143-81c6-5f519b79890f-lib-modules\") pod \"kube-proxy-75l7s\" (UID: \"a78c5684-539d-4143-81c6-5f519b79890f\") " pod="kube-system/kube-proxy-75l7s" Sep 12 17:34:36.502874 kubelet[2521]: I0912 17:34:36.502464 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-h8726\" (UniqueName: \"kubernetes.io/projected/a78c5684-539d-4143-81c6-5f519b79890f-kube-api-access-h8726\") pod \"kube-proxy-75l7s\" (UID: \"a78c5684-539d-4143-81c6-5f519b79890f\") " pod="kube-system/kube-proxy-75l7s" Sep 12 17:34:36.508858 systemd[1]: Created slice kubepods-besteffort-poda78c5684_539d_4143_81c6_5f519b79890f.slice - libcontainer container kubepods-besteffort-poda78c5684_539d_4143_81c6_5f519b79890f.slice. Sep 12 17:34:36.825525 systemd[1]: Created slice kubepods-besteffort-pod5f833bea_c3da_45de_bc4e_79166a973cc1.slice - libcontainer container kubepods-besteffort-pod5f833bea_c3da_45de_bc4e_79166a973cc1.slice. Sep 12 17:34:36.826656 kubelet[2521]: E0912 17:34:36.826265 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:36.828213 containerd[1470]: time="2025-09-12T17:34:36.828049530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75l7s,Uid:a78c5684-539d-4143-81c6-5f519b79890f,Namespace:kube-system,Attempt:0,}" Sep 12 17:34:36.906078 kubelet[2521]: I0912 17:34:36.905089 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dcncf\" (UniqueName: \"kubernetes.io/projected/5f833bea-c3da-45de-bc4e-79166a973cc1-kube-api-access-dcncf\") pod \"tigera-operator-58fc44c59b-nqdbd\" (UID: \"5f833bea-c3da-45de-bc4e-79166a973cc1\") " pod="tigera-operator/tigera-operator-58fc44c59b-nqdbd" Sep 12 17:34:36.906078 kubelet[2521]: I0912 17:34:36.905160 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5f833bea-c3da-45de-bc4e-79166a973cc1-var-lib-calico\") pod \"tigera-operator-58fc44c59b-nqdbd\" (UID: \"5f833bea-c3da-45de-bc4e-79166a973cc1\") " 
pod="tigera-operator/tigera-operator-58fc44c59b-nqdbd" Sep 12 17:34:36.912147 containerd[1470]: time="2025-09-12T17:34:36.909569376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:36.913198 containerd[1470]: time="2025-09-12T17:34:36.913083129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:36.913307 containerd[1470]: time="2025-09-12T17:34:36.913237130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:36.913725 containerd[1470]: time="2025-09-12T17:34:36.913649997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:36.969390 systemd[1]: Started cri-containerd-abe31573bfc469ba63871014383f2d0b2462aaaa98d05f3d8aa1e41e7db9bb0a.scope - libcontainer container abe31573bfc469ba63871014383f2d0b2462aaaa98d05f3d8aa1e41e7db9bb0a. 
Sep 12 17:34:37.050372 containerd[1470]: time="2025-09-12T17:34:37.050288990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-75l7s,Uid:a78c5684-539d-4143-81c6-5f519b79890f,Namespace:kube-system,Attempt:0,} returns sandbox id \"abe31573bfc469ba63871014383f2d0b2462aaaa98d05f3d8aa1e41e7db9bb0a\"" Sep 12 17:34:37.053599 kubelet[2521]: E0912 17:34:37.052330 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:37.059248 containerd[1470]: time="2025-09-12T17:34:37.059159112Z" level=info msg="CreateContainer within sandbox \"abe31573bfc469ba63871014383f2d0b2462aaaa98d05f3d8aa1e41e7db9bb0a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:34:37.129887 containerd[1470]: time="2025-09-12T17:34:37.129395892Z" level=info msg="CreateContainer within sandbox \"abe31573bfc469ba63871014383f2d0b2462aaaa98d05f3d8aa1e41e7db9bb0a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e3791d255f1eaf251f4d11bdc6c30046125732228c7b9bbfcf6ba1fa72725ed7\"" Sep 12 17:34:37.136113 containerd[1470]: time="2025-09-12T17:34:37.133245936Z" level=info msg="StartContainer for \"e3791d255f1eaf251f4d11bdc6c30046125732228c7b9bbfcf6ba1fa72725ed7\"" Sep 12 17:34:37.137264 containerd[1470]: time="2025-09-12T17:34:37.136671432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nqdbd,Uid:5f833bea-c3da-45de-bc4e-79166a973cc1,Namespace:tigera-operator,Attempt:0,}" Sep 12 17:34:37.153421 kubelet[2521]: E0912 17:34:37.153350 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:37.217885 containerd[1470]: time="2025-09-12T17:34:37.217630176Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:37.217885 containerd[1470]: time="2025-09-12T17:34:37.217757913Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:37.217885 containerd[1470]: time="2025-09-12T17:34:37.217822271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:37.224122 containerd[1470]: time="2025-09-12T17:34:37.223680911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:37.226559 systemd[1]: Started cri-containerd-e3791d255f1eaf251f4d11bdc6c30046125732228c7b9bbfcf6ba1fa72725ed7.scope - libcontainer container e3791d255f1eaf251f4d11bdc6c30046125732228c7b9bbfcf6ba1fa72725ed7. Sep 12 17:34:37.266409 systemd[1]: Started cri-containerd-f2b2813f0d335513bfa7edeeb559ae9bcac9e904da742b2dd1d8093e75dacaf3.scope - libcontainer container f2b2813f0d335513bfa7edeeb559ae9bcac9e904da742b2dd1d8093e75dacaf3. 
Sep 12 17:34:37.331945 containerd[1470]: time="2025-09-12T17:34:37.331872783Z" level=info msg="StartContainer for \"e3791d255f1eaf251f4d11bdc6c30046125732228c7b9bbfcf6ba1fa72725ed7\" returns successfully" Sep 12 17:34:37.369977 containerd[1470]: time="2025-09-12T17:34:37.369878711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-nqdbd,Uid:5f833bea-c3da-45de-bc4e-79166a973cc1,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"f2b2813f0d335513bfa7edeeb559ae9bcac9e904da742b2dd1d8093e75dacaf3\"" Sep 12 17:34:37.375367 containerd[1470]: time="2025-09-12T17:34:37.375288036Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 17:34:37.397887 kubelet[2521]: E0912 17:34:37.397735 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:37.406157 kubelet[2521]: E0912 17:34:37.404906 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:37.431721 kubelet[2521]: I0912 17:34:37.431633 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-75l7s" podStartSLOduration=1.431603402 podStartE2EDuration="1.431603402s" podCreationTimestamp="2025-09-12 17:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:34:37.427898902 +0000 UTC m=+5.350107959" watchObservedRunningTime="2025-09-12 17:34:37.431603402 +0000 UTC m=+5.353812458" Sep 12 17:34:37.609064 kubelet[2521]: E0912 17:34:37.607701 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 
17:34:38.409161 kubelet[2521]: E0912 17:34:38.406996 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:38.409161 kubelet[2521]: E0912 17:34:38.407706 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:38.742891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360753389.mount: Deactivated successfully. Sep 12 17:34:39.623401 containerd[1470]: time="2025-09-12T17:34:39.623336088Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.625805 containerd[1470]: time="2025-09-12T17:34:39.625106927Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 17:34:39.625805 containerd[1470]: time="2025-09-12T17:34:39.625430664Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.631057 containerd[1470]: time="2025-09-12T17:34:39.630969032Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:39.632084 containerd[1470]: time="2025-09-12T17:34:39.631779110Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 2.256409523s" Sep 12 
17:34:39.632084 containerd[1470]: time="2025-09-12T17:34:39.631831785Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 17:34:39.638639 containerd[1470]: time="2025-09-12T17:34:39.638560881Z" level=info msg="CreateContainer within sandbox \"f2b2813f0d335513bfa7edeeb559ae9bcac9e904da742b2dd1d8093e75dacaf3\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 17:34:39.659656 containerd[1470]: time="2025-09-12T17:34:39.659579939Z" level=info msg="CreateContainer within sandbox \"f2b2813f0d335513bfa7edeeb559ae9bcac9e904da742b2dd1d8093e75dacaf3\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"477bbf1ab20f764e31f6f9f8588d493023e35e3958071c1766231169932ac150\"" Sep 12 17:34:39.662737 containerd[1470]: time="2025-09-12T17:34:39.661192452Z" level=info msg="StartContainer for \"477bbf1ab20f764e31f6f9f8588d493023e35e3958071c1766231169932ac150\"" Sep 12 17:34:39.721336 systemd[1]: Started cri-containerd-477bbf1ab20f764e31f6f9f8588d493023e35e3958071c1766231169932ac150.scope - libcontainer container 477bbf1ab20f764e31f6f9f8588d493023e35e3958071c1766231169932ac150. Sep 12 17:34:39.759118 containerd[1470]: time="2025-09-12T17:34:39.759059602Z" level=info msg="StartContainer for \"477bbf1ab20f764e31f6f9f8588d493023e35e3958071c1766231169932ac150\" returns successfully" Sep 12 17:34:40.253101 update_engine[1456]: I20250912 17:34:40.252439 1456 update_attempter.cc:509] Updating boot flags... 
Sep 12 17:34:40.284442 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2867) Sep 12 17:34:40.371139 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2868) Sep 12 17:34:40.467058 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2868) Sep 12 17:34:42.155217 kubelet[2521]: E0912 17:34:42.155095 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:42.220201 kubelet[2521]: I0912 17:34:42.219235 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-nqdbd" podStartSLOduration=3.958185243 podStartE2EDuration="6.219211291s" podCreationTimestamp="2025-09-12 17:34:36 +0000 UTC" firstStartedPulling="2025-09-12 17:34:37.373193405 +0000 UTC m=+5.295402430" lastFinishedPulling="2025-09-12 17:34:39.634219446 +0000 UTC m=+7.556428478" observedRunningTime="2025-09-12 17:34:40.436595415 +0000 UTC m=+8.358804456" watchObservedRunningTime="2025-09-12 17:34:42.219211291 +0000 UTC m=+10.141420326" Sep 12 17:34:42.425233 kubelet[2521]: E0912 17:34:42.424222 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:45.686387 sudo[1659]: pam_unix(sudo:session): session closed for user root Sep 12 17:34:45.692741 sshd[1656]: pam_unix(sshd:session): session closed for user core Sep 12 17:34:45.703519 systemd-logind[1455]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:34:45.703828 systemd[1]: sshd@6-64.227.109.162:22-147.75.109.163:59448.service: Deactivated successfully. Sep 12 17:34:45.708163 systemd[1]: session-7.scope: Deactivated successfully. 
Sep 12 17:34:45.708557 systemd[1]: session-7.scope: Consumed 7.557s CPU time, 143.1M memory peak, 0B memory swap peak. Sep 12 17:34:45.712844 systemd-logind[1455]: Removed session 7. Sep 12 17:34:51.568081 kubelet[2521]: W0912 17:34:51.567884 2521 reflector.go:561] object-"calico-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-4081.3.6-8-31c29e3945" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-8-31c29e3945' and this object Sep 12 17:34:51.568081 kubelet[2521]: E0912 17:34:51.567959 2521 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ci-4081.3.6-8-31c29e3945\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-8-31c29e3945' and this object" logger="UnhandledError" Sep 12 17:34:51.568081 kubelet[2521]: W0912 17:34:51.567904 2521 reflector.go:561] object-"calico-system"/"tigera-ca-bundle": failed to list *v1.ConfigMap: configmaps "tigera-ca-bundle" is forbidden: User "system:node:ci-4081.3.6-8-31c29e3945" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.6-8-31c29e3945' and this object Sep 12 17:34:51.568081 kubelet[2521]: E0912 17:34:51.568040 2521 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"tigera-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ci-4081.3.6-8-31c29e3945\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4081.3.6-8-31c29e3945' and this object" logger="UnhandledError" Sep 12 
17:34:51.574518 systemd[1]: Created slice kubepods-besteffort-podb79b4ead_4fa5_4c13_b9ea_9bd39a4b3484.slice - libcontainer container kubepods-besteffort-podb79b4ead_4fa5_4c13_b9ea_9bd39a4b3484.slice. Sep 12 17:34:51.716975 kubelet[2521]: I0912 17:34:51.716795 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484-typha-certs\") pod \"calico-typha-945d79bbd-srz52\" (UID: \"b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484\") " pod="calico-system/calico-typha-945d79bbd-srz52" Sep 12 17:34:51.716975 kubelet[2521]: I0912 17:34:51.716868 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484-tigera-ca-bundle\") pod \"calico-typha-945d79bbd-srz52\" (UID: \"b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484\") " pod="calico-system/calico-typha-945d79bbd-srz52" Sep 12 17:34:51.716975 kubelet[2521]: I0912 17:34:51.716904 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ssq\" (UniqueName: \"kubernetes.io/projected/b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484-kube-api-access-k8ssq\") pod \"calico-typha-945d79bbd-srz52\" (UID: \"b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484\") " pod="calico-system/calico-typha-945d79bbd-srz52" Sep 12 17:34:51.827228 systemd[1]: Created slice kubepods-besteffort-pod468ee326_6b46_4c5f_9260_428e2c0479c1.slice - libcontainer container kubepods-besteffort-pod468ee326_6b46_4c5f_9260_428e2c0479c1.slice. 
Sep 12 17:34:51.918551 kubelet[2521]: I0912 17:34:51.917934 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-policysync\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.918551 kubelet[2521]: I0912 17:34:51.918099 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-cni-bin-dir\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.918551 kubelet[2521]: I0912 17:34:51.918136 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-cni-net-dir\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.918551 kubelet[2521]: I0912 17:34:51.918172 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/468ee326-6b46-4c5f-9260-428e2c0479c1-tigera-ca-bundle\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.918551 kubelet[2521]: I0912 17:34:51.918211 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-cni-log-dir\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.919065 kubelet[2521]: I0912 17:34:51.918238 2521 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-flexvol-driver-host\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.919065 kubelet[2521]: I0912 17:34:51.918275 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-lib-modules\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.919065 kubelet[2521]: I0912 17:34:51.918307 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-var-lib-calico\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.919065 kubelet[2521]: I0912 17:34:51.918339 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mw65\" (UniqueName: \"kubernetes.io/projected/468ee326-6b46-4c5f-9260-428e2c0479c1-kube-api-access-2mw65\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.919065 kubelet[2521]: I0912 17:34:51.918376 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/468ee326-6b46-4c5f-9260-428e2c0479c1-node-certs\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.920139 kubelet[2521]: I0912 17:34:51.918403 2521 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-xtables-lock\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:51.920139 kubelet[2521]: I0912 17:34:51.918435 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/468ee326-6b46-4c5f-9260-428e2c0479c1-var-run-calico\") pod \"calico-node-hwmn7\" (UID: \"468ee326-6b46-4c5f-9260-428e2c0479c1\") " pod="calico-system/calico-node-hwmn7" Sep 12 17:34:52.036973 kubelet[2521]: E0912 17:34:52.036231 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.036973 kubelet[2521]: W0912 17:34:52.036982 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.043169 kubelet[2521]: E0912 17:34:52.043024 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.120674 kubelet[2521]: E0912 17:34:52.120525 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.120674 kubelet[2521]: W0912 17:34:52.120564 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.120674 kubelet[2521]: E0912 17:34:52.120599 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 17:34:52.121317 kubelet[2521]: E0912 17:34:52.121293 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:52.121317 kubelet[2521]: W0912 17:34:52.121316 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:52.121447 kubelet[2521]: E0912 17:34:52.121340 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 17:34:52.150533 kubelet[2521]: E0912 17:34:52.150428 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3"
Sep 12 17:34:52.242286 kubelet[2521]: I0912 17:34:52.242158 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c589e8cc-160a-4a03-8cff-84be5e73deb3-kubelet-dir\") pod \"csi-node-driver-cl7zl\" (UID: \"c589e8cc-160a-4a03-8cff-84be5e73deb3\") " pod="calico-system/csi-node-driver-cl7zl"
Sep 12 17:34:52.242865 kubelet[2521]: I0912 17:34:52.242749 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c589e8cc-160a-4a03-8cff-84be5e73deb3-registration-dir\") pod \"csi-node-driver-cl7zl\" (UID: \"c589e8cc-160a-4a03-8cff-84be5e73deb3\") " pod="calico-system/csi-node-driver-cl7zl"
Sep 12 17:34:52.243760 kubelet[2521]: I0912 17:34:52.243670 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c589e8cc-160a-4a03-8cff-84be5e73deb3-socket-dir\") pod \"csi-node-driver-cl7zl\" (UID: \"c589e8cc-160a-4a03-8cff-84be5e73deb3\") " pod="calico-system/csi-node-driver-cl7zl"
Sep 12 17:34:52.244218 kubelet[2521]: I0912 17:34:52.244114 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2tj7\" (UniqueName: \"kubernetes.io/projected/c589e8cc-160a-4a03-8cff-84be5e73deb3-kube-api-access-d2tj7\") pod \"csi-node-driver-cl7zl\" (UID: \"c589e8cc-160a-4a03-8cff-84be5e73deb3\") " pod="calico-system/csi-node-driver-cl7zl"
Sep 12 17:34:52.245685 kubelet[2521]: I0912 17:34:52.245569 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c589e8cc-160a-4a03-8cff-84be5e73deb3-varrun\") pod \"csi-node-driver-cl7zl\" (UID: \"c589e8cc-160a-4a03-8cff-84be5e73deb3\") " pod="calico-system/csi-node-driver-cl7zl"
Sep 12 17:34:52.359519 kubelet[2521]: E0912 17:34:52.359495 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 17:34:52.359519 kubelet[2521]: W0912 17:34:52.359515 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 17:34:52.359720 kubelet[2521]: E0912 17:34:52.359607 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.360113 kubelet[2521]: E0912 17:34:52.360092 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.360113 kubelet[2521]: W0912 17:34:52.360111 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.360265 kubelet[2521]: E0912 17:34:52.360182 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.360612 kubelet[2521]: E0912 17:34:52.360591 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.360731 kubelet[2521]: W0912 17:34:52.360709 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.360860 kubelet[2521]: E0912 17:34:52.360792 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.361485 kubelet[2521]: E0912 17:34:52.361454 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.361485 kubelet[2521]: W0912 17:34:52.361473 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.361485 kubelet[2521]: E0912 17:34:52.361518 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.362387 kubelet[2521]: E0912 17:34:52.362032 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.362472 kubelet[2521]: W0912 17:34:52.362392 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.362679 kubelet[2521]: E0912 17:34:52.362636 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.362925 kubelet[2521]: E0912 17:34:52.362906 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.362925 kubelet[2521]: W0912 17:34:52.362924 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.363190 kubelet[2521]: E0912 17:34:52.363050 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.363433 kubelet[2521]: E0912 17:34:52.363414 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.363433 kubelet[2521]: W0912 17:34:52.363432 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.363712 kubelet[2521]: E0912 17:34:52.363566 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.364578 kubelet[2521]: E0912 17:34:52.364447 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.364578 kubelet[2521]: W0912 17:34:52.364466 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.364578 kubelet[2521]: E0912 17:34:52.364520 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.364872 kubelet[2521]: E0912 17:34:52.364859 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.365122 kubelet[2521]: W0912 17:34:52.365053 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.365327 kubelet[2521]: E0912 17:34:52.365265 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.366290 kubelet[2521]: E0912 17:34:52.365616 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.366290 kubelet[2521]: W0912 17:34:52.365632 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.366674 kubelet[2521]: E0912 17:34:52.366490 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.366939 kubelet[2521]: E0912 17:34:52.366826 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.366939 kubelet[2521]: W0912 17:34:52.366841 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.367100 kubelet[2521]: E0912 17:34:52.367085 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.367225 kubelet[2521]: E0912 17:34:52.367175 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.367393 kubelet[2521]: W0912 17:34:52.367288 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.367437 kubelet[2521]: E0912 17:34:52.367411 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.367593 kubelet[2521]: E0912 17:34:52.367582 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.367667 kubelet[2521]: W0912 17:34:52.367657 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.368858 kubelet[2521]: E0912 17:34:52.368788 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.369358 kubelet[2521]: E0912 17:34:52.369247 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.369358 kubelet[2521]: W0912 17:34:52.369264 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.369468 kubelet[2521]: E0912 17:34:52.369389 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.369803 kubelet[2521]: E0912 17:34:52.369657 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.369803 kubelet[2521]: W0912 17:34:52.369669 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.369803 kubelet[2521]: E0912 17:34:52.369711 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.370033 kubelet[2521]: E0912 17:34:52.370003 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.370105 kubelet[2521]: W0912 17:34:52.370094 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.370257 kubelet[2521]: E0912 17:34:52.370159 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.370869 kubelet[2521]: E0912 17:34:52.370650 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.370869 kubelet[2521]: W0912 17:34:52.370674 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.370869 kubelet[2521]: E0912 17:34:52.370701 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.372172 kubelet[2521]: E0912 17:34:52.372147 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.372172 kubelet[2521]: W0912 17:34:52.372170 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.372348 kubelet[2521]: E0912 17:34:52.372197 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.373516 kubelet[2521]: E0912 17:34:52.373491 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.373516 kubelet[2521]: W0912 17:34:52.373513 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.373645 kubelet[2521]: E0912 17:34:52.373532 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.466500 kubelet[2521]: E0912 17:34:52.466300 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.466500 kubelet[2521]: W0912 17:34:52.466333 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.466500 kubelet[2521]: E0912 17:34:52.466364 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.468758 kubelet[2521]: E0912 17:34:52.467653 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.468758 kubelet[2521]: W0912 17:34:52.467679 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.468758 kubelet[2521]: E0912 17:34:52.467705 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.469574 kubelet[2521]: E0912 17:34:52.469291 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.469574 kubelet[2521]: W0912 17:34:52.469315 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.469574 kubelet[2521]: E0912 17:34:52.469339 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.470048 kubelet[2521]: E0912 17:34:52.469794 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.470048 kubelet[2521]: W0912 17:34:52.469808 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.470048 kubelet[2521]: E0912 17:34:52.469823 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.470353 kubelet[2521]: E0912 17:34:52.470219 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.470353 kubelet[2521]: W0912 17:34:52.470233 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.470353 kubelet[2521]: E0912 17:34:52.470252 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.572037 kubelet[2521]: E0912 17:34:52.571970 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.572456 kubelet[2521]: W0912 17:34:52.572079 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.572783 kubelet[2521]: E0912 17:34:52.572728 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.573229 kubelet[2521]: E0912 17:34:52.573206 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.573298 kubelet[2521]: W0912 17:34:52.573229 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.573298 kubelet[2521]: E0912 17:34:52.573254 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.573966 kubelet[2521]: E0912 17:34:52.573593 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.573966 kubelet[2521]: W0912 17:34:52.573606 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.573966 kubelet[2521]: E0912 17:34:52.573621 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.573966 kubelet[2521]: E0912 17:34:52.573877 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.573966 kubelet[2521]: W0912 17:34:52.573889 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.573966 kubelet[2521]: E0912 17:34:52.573902 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.575287 kubelet[2521]: E0912 17:34:52.574977 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.575287 kubelet[2521]: W0912 17:34:52.575001 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.575287 kubelet[2521]: E0912 17:34:52.575122 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.676094 kubelet[2521]: E0912 17:34:52.675895 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.676094 kubelet[2521]: W0912 17:34:52.675927 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.676094 kubelet[2521]: E0912 17:34:52.675955 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.676453 kubelet[2521]: E0912 17:34:52.676441 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.676535 kubelet[2521]: W0912 17:34:52.676523 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.676724 kubelet[2521]: E0912 17:34:52.676586 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.676904 kubelet[2521]: E0912 17:34:52.676892 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.677114 kubelet[2521]: W0912 17:34:52.676956 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.677114 kubelet[2521]: E0912 17:34:52.676972 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.677283 kubelet[2521]: E0912 17:34:52.677272 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.677342 kubelet[2521]: W0912 17:34:52.677332 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.677399 kubelet[2521]: E0912 17:34:52.677388 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.677805 kubelet[2521]: E0912 17:34:52.677736 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.677805 kubelet[2521]: W0912 17:34:52.677748 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.677805 kubelet[2521]: E0912 17:34:52.677760 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.741590 kubelet[2521]: E0912 17:34:52.741337 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.741590 kubelet[2521]: W0912 17:34:52.741375 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.741590 kubelet[2521]: E0912 17:34:52.741407 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.743460 kubelet[2521]: E0912 17:34:52.743358 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.743460 kubelet[2521]: W0912 17:34:52.743389 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.743460 kubelet[2521]: E0912 17:34:52.743412 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.778929 kubelet[2521]: E0912 17:34:52.778878 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.778929 kubelet[2521]: W0912 17:34:52.778915 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.778929 kubelet[2521]: E0912 17:34:52.778942 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.780088 kubelet[2521]: E0912 17:34:52.780044 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.780088 kubelet[2521]: W0912 17:34:52.780068 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.780088 kubelet[2521]: E0912 17:34:52.780089 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.780764 kubelet[2521]: E0912 17:34:52.780738 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.780764 kubelet[2521]: W0912 17:34:52.780758 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.780884 kubelet[2521]: E0912 17:34:52.780774 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.845360 kubelet[2521]: E0912 17:34:52.845288 2521 projected.go:288] Couldn't get configMap calico-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:34:52.845360 kubelet[2521]: E0912 17:34:52.845362 2521 projected.go:194] Error preparing data for projected volume kube-api-access-k8ssq for pod calico-system/calico-typha-945d79bbd-srz52: failed to sync configmap cache: timed out waiting for the condition Sep 12 17:34:52.845610 kubelet[2521]: E0912 17:34:52.845486 2521 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484-kube-api-access-k8ssq podName:b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484 nodeName:}" failed. No retries permitted until 2025-09-12 17:34:53.34545122 +0000 UTC m=+21.267660262 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-k8ssq" (UniqueName: "kubernetes.io/projected/b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484-kube-api-access-k8ssq") pod "calico-typha-945d79bbd-srz52" (UID: "b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484") : failed to sync configmap cache: timed out waiting for the condition Sep 12 17:34:52.881808 kubelet[2521]: E0912 17:34:52.881755 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.881808 kubelet[2521]: W0912 17:34:52.881792 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.882053 kubelet[2521]: E0912 17:34:52.881826 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.882181 kubelet[2521]: E0912 17:34:52.882170 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.882236 kubelet[2521]: W0912 17:34:52.882183 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.882236 kubelet[2521]: E0912 17:34:52.882197 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.882458 kubelet[2521]: E0912 17:34:52.882441 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.882458 kubelet[2521]: W0912 17:34:52.882457 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.882552 kubelet[2521]: E0912 17:34:52.882470 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.943054 kubelet[2521]: E0912 17:34:52.942139 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.943054 kubelet[2521]: W0912 17:34:52.942178 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.943054 kubelet[2521]: E0912 17:34:52.942208 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:52.945424 kubelet[2521]: E0912 17:34:52.945384 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.945596 kubelet[2521]: W0912 17:34:52.945571 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.945692 kubelet[2521]: E0912 17:34:52.945673 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:52.984123 kubelet[2521]: E0912 17:34:52.984066 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:52.984123 kubelet[2521]: W0912 17:34:52.984106 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:52.984337 kubelet[2521]: E0912 17:34:52.984140 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.059211 containerd[1470]: time="2025-09-12T17:34:53.059154683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwmn7,Uid:468ee326-6b46-4c5f-9260-428e2c0479c1,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:53.085750 kubelet[2521]: E0912 17:34:53.085511 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.085750 kubelet[2521]: W0912 17:34:53.085580 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.085750 kubelet[2521]: E0912 17:34:53.085606 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.137574 containerd[1470]: time="2025-09-12T17:34:53.133512733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:53.137574 containerd[1470]: time="2025-09-12T17:34:53.133610111Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:53.137574 containerd[1470]: time="2025-09-12T17:34:53.133635125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:53.137974 containerd[1470]: time="2025-09-12T17:34:53.137558240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:53.185312 systemd[1]: Started cri-containerd-6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026.scope - libcontainer container 6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026. Sep 12 17:34:53.188961 kubelet[2521]: E0912 17:34:53.188802 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.188961 kubelet[2521]: W0912 17:34:53.188956 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.189229 kubelet[2521]: E0912 17:34:53.189036 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:53.229963 containerd[1470]: time="2025-09-12T17:34:53.229309020Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hwmn7,Uid:468ee326-6b46-4c5f-9260-428e2c0479c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\"" Sep 12 17:34:53.241547 containerd[1470]: time="2025-09-12T17:34:53.241369570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 17:34:53.290240 kubelet[2521]: E0912 17:34:53.290158 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.290484 kubelet[2521]: W0912 17:34:53.290305 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.290484 kubelet[2521]: E0912 17:34:53.290345 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.392251 kubelet[2521]: E0912 17:34:53.392188 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.392810 kubelet[2521]: W0912 17:34:53.392220 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.392810 kubelet[2521]: E0912 17:34:53.392548 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:53.393190 kubelet[2521]: E0912 17:34:53.393050 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.393190 kubelet[2521]: W0912 17:34:53.393067 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.393190 kubelet[2521]: E0912 17:34:53.393085 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.393529 kubelet[2521]: E0912 17:34:53.393422 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.393529 kubelet[2521]: W0912 17:34:53.393454 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.393529 kubelet[2521]: E0912 17:34:53.393469 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:53.394320 kubelet[2521]: E0912 17:34:53.394102 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.394320 kubelet[2521]: W0912 17:34:53.394160 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.394320 kubelet[2521]: E0912 17:34:53.394190 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.399951 kubelet[2521]: E0912 17:34:53.399458 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.399951 kubelet[2521]: W0912 17:34:53.399497 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.399951 kubelet[2521]: E0912 17:34:53.399523 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 17:34:53.404293 kubelet[2521]: E0912 17:34:53.404246 2521 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 17:34:53.404293 kubelet[2521]: W0912 17:34:53.404282 2521 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 17:34:53.404481 kubelet[2521]: E0912 17:34:53.404313 2521 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 17:34:53.680494 kubelet[2521]: E0912 17:34:53.680426 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:53.682344 containerd[1470]: time="2025-09-12T17:34:53.681600545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-945d79bbd-srz52,Uid:b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484,Namespace:calico-system,Attempt:0,}" Sep 12 17:34:53.720054 containerd[1470]: time="2025-09-12T17:34:53.718410877Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:34:53.720054 containerd[1470]: time="2025-09-12T17:34:53.718484621Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:34:53.720054 containerd[1470]: time="2025-09-12T17:34:53.718496716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:53.720054 containerd[1470]: time="2025-09-12T17:34:53.718615471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:34:53.751386 systemd[1]: Started cri-containerd-82d21381b64958862ee15882deb3332ef63f19c544aca2943215aada653399cd.scope - libcontainer container 82d21381b64958862ee15882deb3332ef63f19c544aca2943215aada653399cd. Sep 12 17:34:53.828716 containerd[1470]: time="2025-09-12T17:34:53.828289833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-945d79bbd-srz52,Uid:b79b4ead-4fa5-4c13-b9ea-9bd39a4b3484,Namespace:calico-system,Attempt:0,} returns sandbox id \"82d21381b64958862ee15882deb3332ef63f19c544aca2943215aada653399cd\"" Sep 12 17:34:53.830120 kubelet[2521]: E0912 17:34:53.829770 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:54.326530 kubelet[2521]: E0912 17:34:54.326440 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:34:54.724675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100083811.mount: Deactivated successfully. 
Sep 12 17:34:54.899032 containerd[1470]: time="2025-09-12T17:34:54.897141829Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:54.899721 containerd[1470]: time="2025-09-12T17:34:54.899654762Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501" Sep 12 17:34:54.900045 containerd[1470]: time="2025-09-12T17:34:54.899986979Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:54.906658 containerd[1470]: time="2025-09-12T17:34:54.906436551Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:54.907505 containerd[1470]: time="2025-09-12T17:34:54.907363646Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.665906432s" Sep 12 17:34:54.907505 containerd[1470]: time="2025-09-12T17:34:54.907423945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 17:34:54.910256 containerd[1470]: time="2025-09-12T17:34:54.910210263Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\"" Sep 12 17:34:54.928538 containerd[1470]: time="2025-09-12T17:34:54.927621180Z" level=info msg="CreateContainer within 
sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 17:34:54.974326 containerd[1470]: time="2025-09-12T17:34:54.973327059Z" level=info msg="CreateContainer within sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b\"" Sep 12 17:34:54.976125 containerd[1470]: time="2025-09-12T17:34:54.974792572Z" level=info msg="StartContainer for \"f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b\"" Sep 12 17:34:55.068311 systemd[1]: Started cri-containerd-f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b.scope - libcontainer container f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b. Sep 12 17:34:55.184909 containerd[1470]: time="2025-09-12T17:34:55.184829959Z" level=info msg="StartContainer for \"f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b\" returns successfully" Sep 12 17:34:55.238953 systemd[1]: cri-containerd-f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b.scope: Deactivated successfully. Sep 12 17:34:55.305091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b-rootfs.mount: Deactivated successfully. 
Sep 12 17:34:55.319459 containerd[1470]: time="2025-09-12T17:34:55.310963041Z" level=info msg="shim disconnected" id=f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b namespace=k8s.io Sep 12 17:34:55.320558 containerd[1470]: time="2025-09-12T17:34:55.319480898Z" level=warning msg="cleaning up after shim disconnected" id=f73c2ddf74b06664dfecf25dd1089c3acb8bfccc607659865e959665e86d446b namespace=k8s.io Sep 12 17:34:55.320558 containerd[1470]: time="2025-09-12T17:34:55.319505658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:34:56.325756 kubelet[2521]: E0912 17:34:56.325667 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:34:57.380228 containerd[1470]: time="2025-09-12T17:34:57.380148483Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.381641 containerd[1470]: time="2025-09-12T17:34:57.381569218Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548" Sep 12 17:34:57.384186 containerd[1470]: time="2025-09-12T17:34:57.384131782Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.388874 containerd[1470]: time="2025-09-12T17:34:57.388802588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:34:57.390984 containerd[1470]: time="2025-09-12T17:34:57.390221512Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 2.479954034s" Sep 12 17:34:57.390984 containerd[1470]: time="2025-09-12T17:34:57.390286795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 17:34:57.394347 containerd[1470]: time="2025-09-12T17:34:57.394287540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 17:34:57.428031 containerd[1470]: time="2025-09-12T17:34:57.427949359Z" level=info msg="CreateContainer within sandbox \"82d21381b64958862ee15882deb3332ef63f19c544aca2943215aada653399cd\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 17:34:57.504351 containerd[1470]: time="2025-09-12T17:34:57.504238197Z" level=info msg="CreateContainer within sandbox \"82d21381b64958862ee15882deb3332ef63f19c544aca2943215aada653399cd\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"0db39ae3d11b80e1fbda8c7dd117d9425be76fe2e7b67ca1a22a749caf8c0649\"" Sep 12 17:34:57.509098 containerd[1470]: time="2025-09-12T17:34:57.507479046Z" level=info msg="StartContainer for \"0db39ae3d11b80e1fbda8c7dd117d9425be76fe2e7b67ca1a22a749caf8c0649\"" Sep 12 17:34:57.674338 systemd[1]: Started cri-containerd-0db39ae3d11b80e1fbda8c7dd117d9425be76fe2e7b67ca1a22a749caf8c0649.scope - libcontainer container 0db39ae3d11b80e1fbda8c7dd117d9425be76fe2e7b67ca1a22a749caf8c0649. 
Sep 12 17:34:57.759260 containerd[1470]: time="2025-09-12T17:34:57.759193016Z" level=info msg="StartContainer for \"0db39ae3d11b80e1fbda8c7dd117d9425be76fe2e7b67ca1a22a749caf8c0649\" returns successfully" Sep 12 17:34:58.324878 kubelet[2521]: E0912 17:34:58.324422 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:34:58.526390 kubelet[2521]: E0912 17:34:58.526333 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:34:59.549639 kubelet[2521]: I0912 17:34:59.549585 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:34:59.552253 kubelet[2521]: E0912 17:34:59.552141 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:00.326319 kubelet[2521]: E0912 17:35:00.325402 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:35:02.337059 kubelet[2521]: E0912 17:35:02.336206 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 
17:35:03.818684 containerd[1470]: time="2025-09-12T17:35:03.818077449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:03.828340 containerd[1470]: time="2025-09-12T17:35:03.828223920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 17:35:03.829666 containerd[1470]: time="2025-09-12T17:35:03.829557401Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:03.834983 containerd[1470]: time="2025-09-12T17:35:03.834870430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:03.836842 containerd[1470]: time="2025-09-12T17:35:03.836387988Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 6.442035965s" Sep 12 17:35:03.836842 containerd[1470]: time="2025-09-12T17:35:03.836460072Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 17:35:03.848946 containerd[1470]: time="2025-09-12T17:35:03.848620551Z" level=info msg="CreateContainer within sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 17:35:03.919779 containerd[1470]: time="2025-09-12T17:35:03.919062919Z" level=info msg="CreateContainer within 
sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a\"" Sep 12 17:35:03.924052 containerd[1470]: time="2025-09-12T17:35:03.921825094Z" level=info msg="StartContainer for \"bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a\"" Sep 12 17:35:04.036443 systemd[1]: Started cri-containerd-bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a.scope - libcontainer container bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a. Sep 12 17:35:04.108930 containerd[1470]: time="2025-09-12T17:35:04.108235049Z" level=info msg="StartContainer for \"bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a\" returns successfully" Sep 12 17:35:04.326387 kubelet[2521]: E0912 17:35:04.325046 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:35:04.605513 kubelet[2521]: I0912 17:35:04.604299 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-945d79bbd-srz52" podStartSLOduration=10.042065214 podStartE2EDuration="13.604250557s" podCreationTimestamp="2025-09-12 17:34:51 +0000 UTC" firstStartedPulling="2025-09-12 17:34:53.831220192 +0000 UTC m=+21.753429214" lastFinishedPulling="2025-09-12 17:34:57.393405512 +0000 UTC m=+25.315614557" observedRunningTime="2025-09-12 17:34:58.543321474 +0000 UTC m=+26.465530509" watchObservedRunningTime="2025-09-12 17:35:04.604250557 +0000 UTC m=+32.526459611" Sep 12 17:35:05.305414 systemd[1]: cri-containerd-bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a.scope: Deactivated successfully. 
Sep 12 17:35:05.405388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a-rootfs.mount: Deactivated successfully. Sep 12 17:35:05.413804 containerd[1470]: time="2025-09-12T17:35:05.411780163Z" level=info msg="shim disconnected" id=bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a namespace=k8s.io Sep 12 17:35:05.413804 containerd[1470]: time="2025-09-12T17:35:05.412117247Z" level=warning msg="cleaning up after shim disconnected" id=bbd6ff78eedb4cbd6827ca0532f96bc188fccfc875419b4f3f5ca6766af4ed7a namespace=k8s.io Sep 12 17:35:05.413804 containerd[1470]: time="2025-09-12T17:35:05.412135247Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:35:05.432277 kubelet[2521]: I0912 17:35:05.431600 2521 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 17:35:05.512773 systemd[1]: Created slice kubepods-besteffort-pod47141fc5_2666_4c35_9d69_2367a7803e63.slice - libcontainer container kubepods-besteffort-pod47141fc5_2666_4c35_9d69_2367a7803e63.slice. Sep 12 17:35:05.535258 systemd[1]: Created slice kubepods-burstable-pod058bb08d_c113_4683_9906_785068b1e043.slice - libcontainer container kubepods-burstable-pod058bb08d_c113_4683_9906_785068b1e043.slice. 
Sep 12 17:35:05.539367 kubelet[2521]: I0912 17:35:05.538679 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlh44\" (UniqueName: \"kubernetes.io/projected/37a86f89-3968-4ebf-bcc4-c3d17db0dd1b-kube-api-access-nlh44\") pod \"coredns-7c65d6cfc9-tnpg9\" (UID: \"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b\") " pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:05.539367 kubelet[2521]: I0912 17:35:05.538752 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/058bb08d-c113-4683-9906-785068b1e043-config-volume\") pod \"coredns-7c65d6cfc9-hdjvw\" (UID: \"058bb08d-c113-4683-9906-785068b1e043\") " pod="kube-system/coredns-7c65d6cfc9-hdjvw" Sep 12 17:35:05.539367 kubelet[2521]: I0912 17:35:05.538793 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/37a86f89-3968-4ebf-bcc4-c3d17db0dd1b-config-volume\") pod \"coredns-7c65d6cfc9-tnpg9\" (UID: \"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b\") " pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:05.539367 kubelet[2521]: I0912 17:35:05.538830 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vrnz\" (UniqueName: \"kubernetes.io/projected/47141fc5-2666-4c35-9d69-2367a7803e63-kube-api-access-7vrnz\") pod \"calico-kube-controllers-9d66ff4cd-cjszq\" (UID: \"47141fc5-2666-4c35-9d69-2367a7803e63\") " pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" Sep 12 17:35:05.539367 kubelet[2521]: I0912 17:35:05.538864 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tq8g\" (UniqueName: \"kubernetes.io/projected/058bb08d-c113-4683-9906-785068b1e043-kube-api-access-8tq8g\") pod \"coredns-7c65d6cfc9-hdjvw\" (UID: 
\"058bb08d-c113-4683-9906-785068b1e043\") " pod="kube-system/coredns-7c65d6cfc9-hdjvw" Sep 12 17:35:05.539781 kubelet[2521]: I0912 17:35:05.538893 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47141fc5-2666-4c35-9d69-2367a7803e63-tigera-ca-bundle\") pod \"calico-kube-controllers-9d66ff4cd-cjszq\" (UID: \"47141fc5-2666-4c35-9d69-2367a7803e63\") " pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" Sep 12 17:35:05.559161 systemd[1]: Created slice kubepods-burstable-pod37a86f89_3968_4ebf_bcc4_c3d17db0dd1b.slice - libcontainer container kubepods-burstable-pod37a86f89_3968_4ebf_bcc4_c3d17db0dd1b.slice. Sep 12 17:35:05.595998 systemd[1]: Created slice kubepods-besteffort-podfade2c26_8080_4e6a_9beb_ec982854f037.slice - libcontainer container kubepods-besteffort-podfade2c26_8080_4e6a_9beb_ec982854f037.slice. Sep 12 17:35:05.628006 systemd[1]: Created slice kubepods-besteffort-podf56838a8_b323_497d_aa49_d43c2031138f.slice - libcontainer container kubepods-besteffort-podf56838a8_b323_497d_aa49_d43c2031138f.slice. 
Sep 12 17:35:05.632188 containerd[1470]: time="2025-09-12T17:35:05.631157768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 17:35:05.648537 kubelet[2521]: I0912 17:35:05.648048 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56838a8-b323-497d-aa49-d43c2031138f-whisker-ca-bundle\") pod \"whisker-5c566ffb4c-zmk9h\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " pod="calico-system/whisker-5c566ffb4c-zmk9h" Sep 12 17:35:05.649839 kubelet[2521]: I0912 17:35:05.649695 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f78g\" (UniqueName: \"kubernetes.io/projected/f56838a8-b323-497d-aa49-d43c2031138f-kube-api-access-7f78g\") pod \"whisker-5c566ffb4c-zmk9h\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " pod="calico-system/whisker-5c566ffb4c-zmk9h" Sep 12 17:35:05.651089 kubelet[2521]: I0912 17:35:05.650531 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d953a4e7-ed0f-478f-9cb2-fa836708ae8f-goldmane-ca-bundle\") pod \"goldmane-7988f88666-kxmsn\" (UID: \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\") " pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:05.654045 systemd[1]: Created slice kubepods-besteffort-podaa60dfc9_ae0e_4b18_997a_7dcfb50c4f05.slice - libcontainer container kubepods-besteffort-podaa60dfc9_ae0e_4b18_997a_7dcfb50c4f05.slice. 
Sep 12 17:35:05.656049 kubelet[2521]: I0912 17:35:05.655949 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d953a4e7-ed0f-478f-9cb2-fa836708ae8f-config\") pod \"goldmane-7988f88666-kxmsn\" (UID: \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\") " pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:05.656509 kubelet[2521]: I0912 17:35:05.656126 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k4bt\" (UniqueName: \"kubernetes.io/projected/fade2c26-8080-4e6a-9beb-ec982854f037-kube-api-access-8k4bt\") pod \"calico-apiserver-56b75c89b8-5w8s5\" (UID: \"fade2c26-8080-4e6a-9beb-ec982854f037\") " pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" Sep 12 17:35:05.656509 kubelet[2521]: I0912 17:35:05.656172 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05-calico-apiserver-certs\") pod \"calico-apiserver-56b75c89b8-mssfh\" (UID: \"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05\") " pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" Sep 12 17:35:05.656509 kubelet[2521]: I0912 17:35:05.656209 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fade2c26-8080-4e6a-9beb-ec982854f037-calico-apiserver-certs\") pod \"calico-apiserver-56b75c89b8-5w8s5\" (UID: \"fade2c26-8080-4e6a-9beb-ec982854f037\") " pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" Sep 12 17:35:05.656509 kubelet[2521]: I0912 17:35:05.656349 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xprgt\" (UniqueName: \"kubernetes.io/projected/aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05-kube-api-access-xprgt\") pod 
\"calico-apiserver-56b75c89b8-mssfh\" (UID: \"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05\") " pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" Sep 12 17:35:05.656509 kubelet[2521]: I0912 17:35:05.656382 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d953a4e7-ed0f-478f-9cb2-fa836708ae8f-goldmane-key-pair\") pod \"goldmane-7988f88666-kxmsn\" (UID: \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\") " pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:05.656722 kubelet[2521]: I0912 17:35:05.656411 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7h66\" (UniqueName: \"kubernetes.io/projected/d953a4e7-ed0f-478f-9cb2-fa836708ae8f-kube-api-access-d7h66\") pod \"goldmane-7988f88666-kxmsn\" (UID: \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\") " pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:05.656722 kubelet[2521]: I0912 17:35:05.656440 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f56838a8-b323-497d-aa49-d43c2031138f-whisker-backend-key-pair\") pod \"whisker-5c566ffb4c-zmk9h\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " pod="calico-system/whisker-5c566ffb4c-zmk9h" Sep 12 17:35:05.671838 systemd[1]: Created slice kubepods-besteffort-podd953a4e7_ed0f_478f_9cb2_fa836708ae8f.slice - libcontainer container kubepods-besteffort-podd953a4e7_ed0f_478f_9cb2_fa836708ae8f.slice. 
Sep 12 17:35:05.824988 containerd[1470]: time="2025-09-12T17:35:05.824408924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d66ff4cd-cjszq,Uid:47141fc5-2666-4c35-9d69-2367a7803e63,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:05.850628 kubelet[2521]: E0912 17:35:05.850398 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:05.852776 containerd[1470]: time="2025-09-12T17:35:05.852449559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdjvw,Uid:058bb08d-c113-4683-9906-785068b1e043,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:05.873385 kubelet[2521]: E0912 17:35:05.873323 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:05.875485 containerd[1470]: time="2025-09-12T17:35:05.875413654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:0,}" Sep 12 17:35:05.920399 containerd[1470]: time="2025-09-12T17:35:05.919928136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-5w8s5,Uid:fade2c26-8080-4e6a-9beb-ec982854f037,Namespace:calico-apiserver,Attempt:0,}" Sep 12 17:35:05.944745 containerd[1470]: time="2025-09-12T17:35:05.944458131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c566ffb4c-zmk9h,Uid:f56838a8-b323-497d-aa49-d43c2031138f,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:05.968917 containerd[1470]: time="2025-09-12T17:35:05.968382481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-mssfh,Uid:aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05,Namespace:calico-apiserver,Attempt:0,}" Sep 12 
17:35:05.987218 containerd[1470]: time="2025-09-12T17:35:05.987154931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kxmsn,Uid:d953a4e7-ed0f-478f-9cb2-fa836708ae8f,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:06.352164 systemd[1]: Created slice kubepods-besteffort-podc589e8cc_160a_4a03_8cff_84be5e73deb3.slice - libcontainer container kubepods-besteffort-podc589e8cc_160a_4a03_8cff_84be5e73deb3.slice. Sep 12 17:35:06.400870 containerd[1470]: time="2025-09-12T17:35:06.395736044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cl7zl,Uid:c589e8cc-160a-4a03-8cff-84be5e73deb3,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:06.685376 containerd[1470]: time="2025-09-12T17:35:06.685096997Z" level=error msg="Failed to destroy network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.690522 containerd[1470]: time="2025-09-12T17:35:06.690428971Z" level=error msg="encountered an error cleaning up failed sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.693794 containerd[1470]: time="2025-09-12T17:35:06.693505396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdjvw,Uid:058bb08d-c113-4683-9906-785068b1e043,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.694308 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e-shm.mount: Deactivated successfully. Sep 12 17:35:06.696579 containerd[1470]: time="2025-09-12T17:35:06.695972332Z" level=error msg="Failed to destroy network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.702735 containerd[1470]: time="2025-09-12T17:35:06.702471259Z" level=error msg="encountered an error cleaning up failed sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.702735 containerd[1470]: time="2025-09-12T17:35:06.702594427Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d66ff4cd-cjszq,Uid:47141fc5-2666-4c35-9d69-2367a7803e63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.703315 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42-shm.mount: Deactivated successfully. 
Sep 12 17:35:06.714305 containerd[1470]: time="2025-09-12T17:35:06.714157011Z" level=error msg="Failed to destroy network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.715160 containerd[1470]: time="2025-09-12T17:35:06.715108265Z" level=error msg="encountered an error cleaning up failed sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.715463 containerd[1470]: time="2025-09-12T17:35:06.715422005Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.716366 kubelet[2521]: E0912 17:35:06.715893 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.716366 kubelet[2521]: E0912 17:35:06.715964 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.716366 kubelet[2521]: E0912 17:35:06.716005 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:06.716366 kubelet[2521]: E0912 17:35:06.716062 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:06.717388 kubelet[2521]: E0912 17:35:06.716084 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" Sep 12 17:35:06.717388 kubelet[2521]: E0912 17:35:06.716118 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" Sep 12 17:35:06.717388 kubelet[2521]: E0912 17:35:06.716139 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tnpg9_kube-system(37a86f89-3968-4ebf-bcc4-c3d17db0dd1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tnpg9_kube-system(37a86f89-3968-4ebf-bcc4-c3d17db0dd1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tnpg9" podUID="37a86f89-3968-4ebf-bcc4-c3d17db0dd1b" Sep 12 17:35:06.717638 kubelet[2521]: E0912 17:35:06.716177 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-9d66ff4cd-cjszq_calico-system(47141fc5-2666-4c35-9d69-2367a7803e63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-9d66ff4cd-cjszq_calico-system(47141fc5-2666-4c35-9d69-2367a7803e63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" podUID="47141fc5-2666-4c35-9d69-2367a7803e63" Sep 12 17:35:06.717638 kubelet[2521]: E0912 
17:35:06.715893 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.717638 kubelet[2521]: E0912 17:35:06.716237 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hdjvw" Sep 12 17:35:06.717865 kubelet[2521]: E0912 17:35:06.716260 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-hdjvw" Sep 12 17:35:06.717865 kubelet[2521]: E0912 17:35:06.716305 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hdjvw_kube-system(058bb08d-c113-4683-9906-785068b1e043)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hdjvw_kube-system(058bb08d-c113-4683-9906-785068b1e043)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hdjvw" podUID="058bb08d-c113-4683-9906-785068b1e043" Sep 12 17:35:06.721979 containerd[1470]: time="2025-09-12T17:35:06.721899912Z" level=error msg="Failed to destroy network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.724971 containerd[1470]: time="2025-09-12T17:35:06.724366140Z" level=error msg="encountered an error cleaning up failed sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.725521 containerd[1470]: time="2025-09-12T17:35:06.725247702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-5w8s5,Uid:fade2c26-8080-4e6a-9beb-ec982854f037,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.726979 kubelet[2521]: E0912 17:35:06.726845 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:35:06.727486 kubelet[2521]: E0912 17:35:06.727136 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" Sep 12 17:35:06.727486 kubelet[2521]: E0912 17:35:06.727412 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" Sep 12 17:35:06.728125 kubelet[2521]: E0912 17:35:06.727926 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56b75c89b8-5w8s5_calico-apiserver(fade2c26-8080-4e6a-9beb-ec982854f037)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56b75c89b8-5w8s5_calico-apiserver(fade2c26-8080-4e6a-9beb-ec982854f037)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" podUID="fade2c26-8080-4e6a-9beb-ec982854f037" Sep 12 17:35:06.779406 containerd[1470]: time="2025-09-12T17:35:06.779333585Z" level=error msg="Failed to destroy network for 
sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.779856 containerd[1470]: time="2025-09-12T17:35:06.779808007Z" level=error msg="encountered an error cleaning up failed sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.779935 containerd[1470]: time="2025-09-12T17:35:06.779895821Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c566ffb4c-zmk9h,Uid:f56838a8-b323-497d-aa49-d43c2031138f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.780541 kubelet[2521]: E0912 17:35:06.780482 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.780644 kubelet[2521]: E0912 17:35:06.780567 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c566ffb4c-zmk9h" Sep 12 17:35:06.780644 kubelet[2521]: E0912 17:35:06.780599 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c566ffb4c-zmk9h" Sep 12 17:35:06.780768 kubelet[2521]: E0912 17:35:06.780662 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c566ffb4c-zmk9h_calico-system(f56838a8-b323-497d-aa49-d43c2031138f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c566ffb4c-zmk9h_calico-system(f56838a8-b323-497d-aa49-d43c2031138f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c566ffb4c-zmk9h" podUID="f56838a8-b323-497d-aa49-d43c2031138f" Sep 12 17:35:06.786649 containerd[1470]: time="2025-09-12T17:35:06.786222310Z" level=error msg="Failed to destroy network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.788737 containerd[1470]: time="2025-09-12T17:35:06.788610456Z" level=error msg="encountered an error 
cleaning up failed sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.788917 containerd[1470]: time="2025-09-12T17:35:06.788740749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kxmsn,Uid:d953a4e7-ed0f-478f-9cb2-fa836708ae8f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.789957 kubelet[2521]: E0912 17:35:06.789063 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.789957 kubelet[2521]: E0912 17:35:06.789142 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:06.789957 kubelet[2521]: E0912 17:35:06.789186 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-kxmsn" Sep 12 17:35:06.790497 kubelet[2521]: E0912 17:35:06.789263 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-kxmsn_calico-system(d953a4e7-ed0f-478f-9cb2-fa836708ae8f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-kxmsn_calico-system(d953a4e7-ed0f-478f-9cb2-fa836708ae8f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-kxmsn" podUID="d953a4e7-ed0f-478f-9cb2-fa836708ae8f" Sep 12 17:35:06.809882 containerd[1470]: time="2025-09-12T17:35:06.809711687Z" level=error msg="Failed to destroy network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.811299 containerd[1470]: time="2025-09-12T17:35:06.811225357Z" level=error msg="encountered an error cleaning up failed sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.811581 containerd[1470]: 
time="2025-09-12T17:35:06.811432950Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-mssfh,Uid:aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.812387 kubelet[2521]: E0912 17:35:06.812307 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.812515 kubelet[2521]: E0912 17:35:06.812414 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" Sep 12 17:35:06.812778 kubelet[2521]: E0912 17:35:06.812734 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" Sep 12 17:35:06.812900 
kubelet[2521]: E0912 17:35:06.812834 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-56b75c89b8-mssfh_calico-apiserver(aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-56b75c89b8-mssfh_calico-apiserver(aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" podUID="aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05" Sep 12 17:35:06.830880 containerd[1470]: time="2025-09-12T17:35:06.830792008Z" level=error msg="Failed to destroy network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.831642 containerd[1470]: time="2025-09-12T17:35:06.831523353Z" level=error msg="encountered an error cleaning up failed sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.831793 containerd[1470]: time="2025-09-12T17:35:06.831610038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cl7zl,Uid:c589e8cc-160a-4a03-8cff-84be5e73deb3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.833311 kubelet[2521]: E0912 17:35:06.832074 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:06.833311 kubelet[2521]: E0912 17:35:06.832142 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cl7zl" Sep 12 17:35:06.833311 kubelet[2521]: E0912 17:35:06.832165 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cl7zl" Sep 12 17:35:06.833558 kubelet[2521]: E0912 17:35:06.832216 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cl7zl_calico-system(c589e8cc-160a-4a03-8cff-84be5e73deb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-cl7zl_calico-system(c589e8cc-160a-4a03-8cff-84be5e73deb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:35:07.402732 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0-shm.mount: Deactivated successfully. Sep 12 17:35:07.402935 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e-shm.mount: Deactivated successfully. Sep 12 17:35:07.403061 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee-shm.mount: Deactivated successfully. Sep 12 17:35:07.403158 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8-shm.mount: Deactivated successfully. Sep 12 17:35:07.403265 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9-shm.mount: Deactivated successfully. Sep 12 17:35:07.403359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627-shm.mount: Deactivated successfully. 
Sep 12 17:35:07.628883 kubelet[2521]: I0912 17:35:07.628751 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:07.633888 kubelet[2521]: I0912 17:35:07.633398 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:07.644316 containerd[1470]: time="2025-09-12T17:35:07.643366433Z" level=info msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" Sep 12 17:35:07.644316 containerd[1470]: time="2025-09-12T17:35:07.644068153Z" level=info msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\"" Sep 12 17:35:07.658116 kubelet[2521]: I0912 17:35:07.653921 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:07.660708 containerd[1470]: time="2025-09-12T17:35:07.659752686Z" level=info msg="Ensure that sandbox 620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee in task-service has been cleanup successfully" Sep 12 17:35:07.671560 containerd[1470]: time="2025-09-12T17:35:07.663127796Z" level=info msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\"" Sep 12 17:35:07.671560 containerd[1470]: time="2025-09-12T17:35:07.663415928Z" level=info msg="Ensure that sandbox 3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8 in task-service has been cleanup successfully" Sep 12 17:35:07.672261 containerd[1470]: time="2025-09-12T17:35:07.672197958Z" level=info msg="Ensure that sandbox b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0 in task-service has been cleanup successfully" Sep 12 17:35:07.692862 kubelet[2521]: I0912 17:35:07.692803 2521 pod_container_deletor.go:80] "Container not found in 
pod's containers" containerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:07.703287 containerd[1470]: time="2025-09-12T17:35:07.703207016Z" level=info msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" Sep 12 17:35:07.705401 containerd[1470]: time="2025-09-12T17:35:07.703516618Z" level=info msg="Ensure that sandbox ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e in task-service has been cleanup successfully" Sep 12 17:35:07.716760 kubelet[2521]: I0912 17:35:07.716684 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:07.728092 containerd[1470]: time="2025-09-12T17:35:07.727969739Z" level=info msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\"" Sep 12 17:35:07.728753 containerd[1470]: time="2025-09-12T17:35:07.728701804Z" level=info msg="Ensure that sandbox 4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e in task-service has been cleanup successfully" Sep 12 17:35:07.736303 kubelet[2521]: I0912 17:35:07.736213 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:07.744305 containerd[1470]: time="2025-09-12T17:35:07.743991686Z" level=info msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" Sep 12 17:35:07.744993 kubelet[2521]: I0912 17:35:07.744951 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:07.749722 containerd[1470]: time="2025-09-12T17:35:07.749660885Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:07.749945 containerd[1470]: 
time="2025-09-12T17:35:07.749915838Z" level=info msg="Ensure that sandbox a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627 in task-service has been cleanup successfully" Sep 12 17:35:07.751237 containerd[1470]: time="2025-09-12T17:35:07.750994795Z" level=info msg="Ensure that sandbox eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9 in task-service has been cleanup successfully" Sep 12 17:35:07.787334 kubelet[2521]: I0912 17:35:07.787282 2521 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:07.795633 containerd[1470]: time="2025-09-12T17:35:07.795542101Z" level=info msg="StopPodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" Sep 12 17:35:07.798872 containerd[1470]: time="2025-09-12T17:35:07.798636213Z" level=info msg="Ensure that sandbox c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42 in task-service has been cleanup successfully" Sep 12 17:35:07.952684 containerd[1470]: time="2025-09-12T17:35:07.948712753Z" level=error msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" failed" error="failed to destroy network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:07.954045 kubelet[2521]: E0912 17:35:07.953103 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:07.956529 kubelet[2521]: E0912 17:35:07.953497 2521 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"} Sep 12 17:35:07.956529 kubelet[2521]: E0912 17:35:07.955713 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:07.956529 kubelet[2521]: E0912 17:35:07.955754 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" podUID="aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05" Sep 12 17:35:07.999438 containerd[1470]: time="2025-09-12T17:35:07.999356293Z" level=error msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" failed" error="failed to destroy network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Sep 12 17:35:08.002408 kubelet[2521]: E0912 17:35:08.000808 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:08.005128 kubelet[2521]: E0912 17:35:08.002817 2521 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"} Sep 12 17:35:08.005128 kubelet[2521]: E0912 17:35:08.003072 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f56838a8-b323-497d-aa49-d43c2031138f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.005128 kubelet[2521]: E0912 17:35:08.004957 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f56838a8-b323-497d-aa49-d43c2031138f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c566ffb4c-zmk9h" podUID="f56838a8-b323-497d-aa49-d43c2031138f" Sep 12 
17:35:08.013667 containerd[1470]: time="2025-09-12T17:35:08.013497478Z" level=error msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" failed" error="failed to destroy network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.014969 kubelet[2521]: E0912 17:35:08.014668 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:08.014969 kubelet[2521]: E0912 17:35:08.014759 2521 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e"} Sep 12 17:35:08.014969 kubelet[2521]: E0912 17:35:08.014852 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"058bb08d-c113-4683-9906-785068b1e043\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.014969 kubelet[2521]: E0912 17:35:08.014900 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"058bb08d-c113-4683-9906-785068b1e043\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-hdjvw" podUID="058bb08d-c113-4683-9906-785068b1e043" Sep 12 17:35:08.022343 containerd[1470]: time="2025-09-12T17:35:08.021959084Z" level=error msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" failed" error="failed to destroy network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.023313 kubelet[2521]: E0912 17:35:08.023040 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:08.023313 kubelet[2521]: E0912 17:35:08.023133 2521 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0"} Sep 12 17:35:08.023313 kubelet[2521]: E0912 17:35:08.023195 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c589e8cc-160a-4a03-8cff-84be5e73deb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.023313 kubelet[2521]: E0912 17:35:08.023247 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c589e8cc-160a-4a03-8cff-84be5e73deb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cl7zl" podUID="c589e8cc-160a-4a03-8cff-84be5e73deb3" Sep 12 17:35:08.027291 containerd[1470]: time="2025-09-12T17:35:08.027201286Z" level=error msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" failed" error="failed to destroy network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.028051 kubelet[2521]: E0912 17:35:08.027750 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:08.028051 kubelet[2521]: E0912 17:35:08.027836 2521 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627"} Sep 12 17:35:08.028051 kubelet[2521]: E0912 17:35:08.027895 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.028051 kubelet[2521]: E0912 17:35:08.027931 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tnpg9" podUID="37a86f89-3968-4ebf-bcc4-c3d17db0dd1b" Sep 12 17:35:08.028830 kubelet[2521]: E0912 17:35:08.028282 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:08.028830 kubelet[2521]: E0912 17:35:08.028345 2521 kuberuntime_manager.go:1479] "Failed to 
stop sandbox" podSandboxID={"Type":"containerd","ID":"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"} Sep 12 17:35:08.028830 kubelet[2521]: E0912 17:35:08.028400 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.028830 kubelet[2521]: E0912 17:35:08.028439 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d953a4e7-ed0f-478f-9cb2-fa836708ae8f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-kxmsn" podUID="d953a4e7-ed0f-478f-9cb2-fa836708ae8f" Sep 12 17:35:08.029132 containerd[1470]: time="2025-09-12T17:35:08.027439793Z" level=error msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" failed" error="failed to destroy network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.043637 containerd[1470]: time="2025-09-12T17:35:08.043540513Z" level=error msg="StopPodSandbox for 
\"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" failed" error="failed to destroy network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.043851 containerd[1470]: time="2025-09-12T17:35:08.043738986Z" level=error msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" failed" error="failed to destroy network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 17:35:08.044355 kubelet[2521]: E0912 17:35:08.043992 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:08.044355 kubelet[2521]: E0912 17:35:08.044064 2521 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:08.044355 kubelet[2521]: E0912 17:35:08.044148 2521 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42"} Sep 12 17:35:08.044355 kubelet[2521]: E0912 17:35:08.044207 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47141fc5-2666-4c35-9d69-2367a7803e63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.044355 kubelet[2521]: E0912 17:35:08.044102 2521 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9"} Sep 12 17:35:08.045301 kubelet[2521]: E0912 17:35:08.044251 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47141fc5-2666-4c35-9d69-2367a7803e63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" podUID="47141fc5-2666-4c35-9d69-2367a7803e63" Sep 12 17:35:08.045301 kubelet[2521]: E0912 17:35:08.044264 2521 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fade2c26-8080-4e6a-9beb-ec982854f037\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 17:35:08.045301 kubelet[2521]: E0912 17:35:08.044325 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fade2c26-8080-4e6a-9beb-ec982854f037\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" podUID="fade2c26-8080-4e6a-9beb-ec982854f037" Sep 12 17:35:16.244854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778839175.mount: Deactivated successfully. Sep 12 17:35:16.414860 containerd[1470]: time="2025-09-12T17:35:16.377679087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 17:35:16.421134 containerd[1470]: time="2025-09-12T17:35:16.420422002Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 10.782515558s" Sep 12 17:35:16.421134 containerd[1470]: time="2025-09-12T17:35:16.420527022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 17:35:16.421448 containerd[1470]: time="2025-09-12T17:35:16.421202496Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.464750 containerd[1470]: time="2025-09-12T17:35:16.463941811Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.466753 containerd[1470]: time="2025-09-12T17:35:16.465086941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:16.533113 containerd[1470]: time="2025-09-12T17:35:16.532772802Z" level=info msg="CreateContainer within sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 17:35:16.608383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount397303463.mount: Deactivated successfully. Sep 12 17:35:16.626313 containerd[1470]: time="2025-09-12T17:35:16.626194609Z" level=info msg="CreateContainer within sandbox \"6b4ca5feec8c09c4b3479e0a6401f3c531b536da3e85a5fc21b32d0ba92d2026\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc\"" Sep 12 17:35:16.636851 containerd[1470]: time="2025-09-12T17:35:16.635455011Z" level=info msg="StartContainer for \"10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc\"" Sep 12 17:35:16.863620 systemd[1]: Started cri-containerd-10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc.scope - libcontainer container 10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc. 
Sep 12 17:35:16.959243 containerd[1470]: time="2025-09-12T17:35:16.959156480Z" level=info msg="StartContainer for \"10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc\" returns successfully" Sep 12 17:35:17.151526 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 17:35:17.154658 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Sep 12 17:35:17.514419 containerd[1470]: time="2025-09-12T17:35:17.514233249Z" level=info msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\"" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.694 [INFO][3733] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.702 [INFO][3733] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" iface="eth0" netns="/var/run/netns/cni-16867999-95c2-85d1-638e-ecf85c8090e4" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.702 [INFO][3733] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" iface="eth0" netns="/var/run/netns/cni-16867999-95c2-85d1-638e-ecf85c8090e4" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.704 [INFO][3733] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" iface="eth0" netns="/var/run/netns/cni-16867999-95c2-85d1-638e-ecf85c8090e4" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.704 [INFO][3733] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.704 [INFO][3733] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.922 [INFO][3741] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.925 [INFO][3741] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.926 [INFO][3741] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.948 [WARNING][3741] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.949 [INFO][3741] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0" Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.955 [INFO][3741] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:17.967619 containerd[1470]: 2025-09-12 17:35:17.960 [INFO][3733] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Sep 12 17:35:17.972315 containerd[1470]: time="2025-09-12T17:35:17.970650333Z" level=info msg="TearDown network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" successfully" Sep 12 17:35:17.972315 containerd[1470]: time="2025-09-12T17:35:17.970710487Z" level=info msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" returns successfully" Sep 12 17:35:17.978966 systemd[1]: run-netns-cni\x2d16867999\x2d95c2\x2d85d1\x2d638e\x2decf85c8090e4.mount: Deactivated successfully. 
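Editor's note: the `run-netns-cni\x2d….mount: Deactivated successfully` entry above shows systemd's unit-name escaping, where `/` in a path becomes `-` and a literal `-` becomes `\x2d`. A minimal sketch of the reverse mapping (a simplified take on `systemd-escape --unescape`; real systemd also handles a few extra cases such as escaped leading dots):

```python
import re

def systemd_unescape(unit: str) -> str:
    """Reverse systemd mount-unit name escaping: '-' separates path
    components and '\\xNN' encodes the byte 0xNN literally."""
    # Strip the unit-type suffix (.mount, .service, ...) if present.
    name = unit.rsplit(".", 1)[0]
    # '-' maps back to '/'. The literal characters '\', 'x', '2', 'd'
    # contain no hyphen, so escaped bytes are untouched by this step.
    path = "/" + name.replace("-", "/")
    # Expand each \xNN escape into the byte it encodes.
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), path)

print(systemd_unescape(
    r"run-netns-cni\x2d16867999\x2d95c2\x2d85d1\x2d638e\x2decf85c8090e4.mount"))
# -> /run/netns/cni-16867999-95c2-85d1-638e-ecf85c8090e4
```

That recovered path matches the `netns="/var/run/netns/cni-16867999-…"` the CNI plugin logged while tearing the sandbox down (`/var/run` is a symlink to `/run` on Flatcar).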
Sep 12 17:35:18.002034 kubelet[2521]: I0912 17:35:17.984216 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hwmn7" podStartSLOduration=3.723006472 podStartE2EDuration="26.949672242s" podCreationTimestamp="2025-09-12 17:34:51 +0000 UTC" firstStartedPulling="2025-09-12 17:34:53.240481922 +0000 UTC m=+21.162690931" lastFinishedPulling="2025-09-12 17:35:16.467147673 +0000 UTC m=+44.389356701" observedRunningTime="2025-09-12 17:35:17.946481486 +0000 UTC m=+45.868690544" watchObservedRunningTime="2025-09-12 17:35:17.949672242 +0000 UTC m=+45.871881286" Sep 12 17:35:18.122994 kubelet[2521]: I0912 17:35:18.120801 2521 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f56838a8-b323-497d-aa49-d43c2031138f-whisker-backend-key-pair\") pod \"f56838a8-b323-497d-aa49-d43c2031138f\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " Sep 12 17:35:18.124788 kubelet[2521]: I0912 17:35:18.124618 2521 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f78g\" (UniqueName: \"kubernetes.io/projected/f56838a8-b323-497d-aa49-d43c2031138f-kube-api-access-7f78g\") pod \"f56838a8-b323-497d-aa49-d43c2031138f\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " Sep 12 17:35:18.141898 kubelet[2521]: I0912 17:35:18.141843 2521 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56838a8-b323-497d-aa49-d43c2031138f-whisker-ca-bundle\") pod \"f56838a8-b323-497d-aa49-d43c2031138f\" (UID: \"f56838a8-b323-497d-aa49-d43c2031138f\") " Sep 12 17:35:18.194336 kubelet[2521]: I0912 17:35:18.194240 2521 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f56838a8-b323-497d-aa49-d43c2031138f-kube-api-access-7f78g" (OuterVolumeSpecName: "kube-api-access-7f78g") pod 
"f56838a8-b323-497d-aa49-d43c2031138f" (UID: "f56838a8-b323-497d-aa49-d43c2031138f"). InnerVolumeSpecName "kube-api-access-7f78g". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 17:35:18.194835 kubelet[2521]: I0912 17:35:18.191819 2521 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f56838a8-b323-497d-aa49-d43c2031138f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f56838a8-b323-497d-aa49-d43c2031138f" (UID: "f56838a8-b323-497d-aa49-d43c2031138f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 17:35:18.196514 systemd[1]: var-lib-kubelet-pods-f56838a8\x2db323\x2d497d\x2daa49\x2dd43c2031138f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7f78g.mount: Deactivated successfully. Sep 12 17:35:18.201902 kubelet[2521]: I0912 17:35:18.201793 2521 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f56838a8-b323-497d-aa49-d43c2031138f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f56838a8-b323-497d-aa49-d43c2031138f" (UID: "f56838a8-b323-497d-aa49-d43c2031138f"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 17:35:18.209438 systemd[1]: var-lib-kubelet-pods-f56838a8\x2db323\x2d497d\x2daa49\x2dd43c2031138f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Sep 12 17:35:18.243788 kubelet[2521]: I0912 17:35:18.243608 2521 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f56838a8-b323-497d-aa49-d43c2031138f-whisker-ca-bundle\") on node \"ci-4081.3.6-8-31c29e3945\" DevicePath \"\"" Sep 12 17:35:18.243788 kubelet[2521]: I0912 17:35:18.243670 2521 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f56838a8-b323-497d-aa49-d43c2031138f-whisker-backend-key-pair\") on node \"ci-4081.3.6-8-31c29e3945\" DevicePath \"\"" Sep 12 17:35:18.243788 kubelet[2521]: I0912 17:35:18.243689 2521 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f78g\" (UniqueName: \"kubernetes.io/projected/f56838a8-b323-497d-aa49-d43c2031138f-kube-api-access-7f78g\") on node \"ci-4081.3.6-8-31c29e3945\" DevicePath \"\"" Sep 12 17:35:18.355695 systemd[1]: Removed slice kubepods-besteffort-podf56838a8_b323_497d_aa49_d43c2031138f.slice - libcontainer container kubepods-besteffort-podf56838a8_b323_497d_aa49_d43c2031138f.slice. Sep 12 17:35:18.910231 kubelet[2521]: I0912 17:35:18.909647 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:19.057988 systemd[1]: Created slice kubepods-besteffort-pod122e55ca_1e61_46b6_9deb_3aaea9fecba6.slice - libcontainer container kubepods-besteffort-pod122e55ca_1e61_46b6_9deb_3aaea9fecba6.slice. 
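Editor's note: the `pod_startup_latency_tracker` entry for `calico-node-hwmn7` above carries four timestamps whose arithmetic is worth making explicit. The durations in that entry are consistent with the E2E figure being observed-running minus creation, and the SLO figure being the same interval minus image-pull time; a sketch under that assumption (timestamps truncated to microseconds, since Python's `datetime` can't hold the log's nanoseconds):

```python
from datetime import datetime, timezone

def startup_durations(created, first_pull, last_pull, observed_running):
    """Recompute the kubelet startup metrics: E2E is observed-running
    minus pod creation; the SLO duration additionally excludes the
    time spent pulling images."""
    e2e = (observed_running - created).total_seconds()
    pulling = (last_pull - first_pull).total_seconds()
    return e2e, e2e - pulling

fmt = "%Y-%m-%d %H:%M:%S.%f"
utc = lambda s: datetime.strptime(s, fmt).replace(tzinfo=timezone.utc)

# Timestamps from the log entry above (microsecond precision).
e2e, slo = startup_durations(
    utc("2025-09-12 17:34:51.000000"),   # podCreationTimestamp
    utc("2025-09-12 17:34:53.240481"),   # firstStartedPulling
    utc("2025-09-12 17:35:16.467147"),   # lastFinishedPulling
    utc("2025-09-12 17:35:17.949672"),   # watchObservedRunningTime
)
print(f"E2E={e2e:.3f}s SLO={slo:.3f}s")
# -> E2E=26.950s SLO=3.723s, matching podStartE2EDuration="26.949672242s"
#    and podStartSLOduration=3.723006472 in the entry.
```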
Sep 12 17:35:19.185645 kubelet[2521]: I0912 17:35:19.185141 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/122e55ca-1e61-46b6-9deb-3aaea9fecba6-whisker-backend-key-pair\") pod \"whisker-f7b85f5cd-b7jzh\" (UID: \"122e55ca-1e61-46b6-9deb-3aaea9fecba6\") " pod="calico-system/whisker-f7b85f5cd-b7jzh" Sep 12 17:35:19.185645 kubelet[2521]: I0912 17:35:19.185212 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8cxh\" (UniqueName: \"kubernetes.io/projected/122e55ca-1e61-46b6-9deb-3aaea9fecba6-kube-api-access-z8cxh\") pod \"whisker-f7b85f5cd-b7jzh\" (UID: \"122e55ca-1e61-46b6-9deb-3aaea9fecba6\") " pod="calico-system/whisker-f7b85f5cd-b7jzh" Sep 12 17:35:19.185645 kubelet[2521]: I0912 17:35:19.185259 2521 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122e55ca-1e61-46b6-9deb-3aaea9fecba6-whisker-ca-bundle\") pod \"whisker-f7b85f5cd-b7jzh\" (UID: \"122e55ca-1e61-46b6-9deb-3aaea9fecba6\") " pod="calico-system/whisker-f7b85f5cd-b7jzh" Sep 12 17:35:19.325155 containerd[1470]: time="2025-09-12T17:35:19.325089808Z" level=info msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" Sep 12 17:35:19.372507 containerd[1470]: time="2025-09-12T17:35:19.372459594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7b85f5cd-b7jzh,Uid:122e55ca-1e61-46b6-9deb-3aaea9fecba6,Namespace:calico-system,Attempt:0,}" Sep 12 17:35:19.523692 kubelet[2521]: I0912 17:35:19.523501 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:19.525209 kubelet[2521]: E0912 17:35:19.525167 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver 
line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.434 [INFO][3781] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.438 [INFO][3781] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" iface="eth0" netns="/var/run/netns/cni-e2eaad26-35bd-8615-057a-acd4c999a6e6" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.439 [INFO][3781] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" iface="eth0" netns="/var/run/netns/cni-e2eaad26-35bd-8615-057a-acd4c999a6e6" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.449 [INFO][3781] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" iface="eth0" netns="/var/run/netns/cni-e2eaad26-35bd-8615-057a-acd4c999a6e6" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.450 [INFO][3781] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.450 [INFO][3781] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.515 [INFO][3827] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.516 [INFO][3827] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.516 [INFO][3827] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.532 [WARNING][3827] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.532 [INFO][3827] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.544 [INFO][3827] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:19.558588 containerd[1470]: 2025-09-12 17:35:19.552 [INFO][3781] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:19.563052 containerd[1470]: time="2025-09-12T17:35:19.562344987Z" level=info msg="TearDown network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" successfully" Sep 12 17:35:19.563052 containerd[1470]: time="2025-09-12T17:35:19.562410838Z" level=info msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" returns successfully" Sep 12 17:35:19.567912 systemd[1]: run-netns-cni\x2de2eaad26\x2d35bd\x2d8615\x2d057a\x2dacd4c999a6e6.mount: Deactivated successfully. 
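Editor's note: the `ipam/ipam_plugin.go` entries above and below trace a fixed sequence: acquire the host-wide IPAM lock, look up the host's block affinity (`192.168.73.128/26`), load the block, claim an address, write the block back, release the lock. A toy model of that sequence (illustrative only, not Calico's implementation — real Calico persists blocks in the datastore and uses CAS, not an in-process lock):

```python
import ipaddress
from threading import Lock

class BlockIPAM:
    """Toy model of the log's IPAM sequence: take a host-wide lock,
    scan the affine block for a free address, record the claim."""

    def __init__(self, cidr):
        self.block = ipaddress.ip_network(cidr)
        self.allocated = {}          # ip -> handle (workload/container ID)
        self.host_lock = Lock()      # the "host-wide IPAM lock"

    def auto_assign(self, handle):
        with self.host_lock:                      # "Acquired host-wide IPAM lock"
            for ip in self.block.hosts():         # "Attempting to assign ... from block"
                if ip not in self.allocated:
                    self.allocated[ip] = handle   # "Writing block in order to claim IPs"
                    return f"{ip}/{self.block.prefixlen}"
            raise RuntimeError("block exhausted")
        # lock released on exit                   # "Released host-wide IPAM lock"

    def release(self, handle):
        with self.host_lock:
            victims = [ip for ip, h in self.allocated.items() if h == handle]
            if not victims:                       # the WARNING path seen above
                return "Asked to release address but it doesn't exist. Ignoring"
            for ip in victims:
                del self.allocated[ip]

ipam = BlockIPAM("192.168.73.128/26")
print(ipam.auto_assign("whisker-f7b85f5cd-b7jzh"))  # 192.168.73.129/26, as in the log
print(ipam.auto_assign("csi-node-driver-cl7zl"))    # 192.168.73.130/26, as in the log
```

Because `.128` is the block's network address, the first two assignable hosts are `.129` and `.130` — exactly the addresses the whisker and csi-node-driver pods receive later in this log.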
Sep 12 17:35:19.569967 containerd[1470]: time="2025-09-12T17:35:19.568340883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cl7zl,Uid:c589e8cc-160a-4a03-8cff-84be5e73deb3,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:19.919724 kubelet[2521]: E0912 17:35:19.919662 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:19.968666 systemd-networkd[1361]: cali4b4770355c0: Link UP Sep 12 17:35:19.968882 systemd-networkd[1361]: cali4b4770355c0: Gained carrier Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.532 [INFO][3814] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.574 [INFO][3814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0 whisker-f7b85f5cd- calico-system 122e55ca-1e61-46b6-9deb-3aaea9fecba6 910 0 2025-09-12 17:35:18 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:f7b85f5cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 whisker-f7b85f5cd-b7jzh eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali4b4770355c0 [] [] }} ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.575 [INFO][3814] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" 
WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.769 [INFO][3851] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" HandleID="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.769 [INFO][3851] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" HandleID="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003314b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"whisker-f7b85f5cd-b7jzh", "timestamp":"2025-09-12 17:35:19.769108382 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.769 [INFO][3851] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.769 [INFO][3851] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.769 [INFO][3851] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.795 [INFO][3851] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.810 [INFO][3851] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.830 [INFO][3851] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.844 [INFO][3851] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.854 [INFO][3851] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.854 [INFO][3851] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.871 [INFO][3851] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3 Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.884 [INFO][3851] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.908 [INFO][3851] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.129/26] block=192.168.73.128/26 handle="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.908 [INFO][3851] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.129/26] handle="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.908 [INFO][3851] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:20.077055 containerd[1470]: 2025-09-12 17:35:19.909 [INFO][3851] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.129/26] IPv6=[] ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" HandleID="k8s-pod-network.8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:19.921 [INFO][3814] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0", GenerateName:"whisker-f7b85f5cd-", Namespace:"calico-system", SelfLink:"", UID:"122e55ca-1e61-46b6-9deb-3aaea9fecba6", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7b85f5cd", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"whisker-f7b85f5cd-b7jzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4b4770355c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:19.922 [INFO][3814] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.129/32] ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:19.924 [INFO][3814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b4770355c0 ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:19.952 [INFO][3814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:19.954 [INFO][3814] cni-plugin/k8s.go 446: Added 
Mac, interface name, and active container ID to endpoint ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0", GenerateName:"whisker-f7b85f5cd-", Namespace:"calico-system", SelfLink:"", UID:"122e55ca-1e61-46b6-9deb-3aaea9fecba6", ResourceVersion:"910", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 35, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"f7b85f5cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3", Pod:"whisker-f7b85f5cd-b7jzh", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.73.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali4b4770355c0", MAC:"82:a9:8f:da:c6:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:20.078268 containerd[1470]: 2025-09-12 17:35:20.065 [INFO][3814] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3" 
Namespace="calico-system" Pod="whisker-f7b85f5cd-b7jzh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--f7b85f5cd--b7jzh-eth0" Sep 12 17:35:20.160451 systemd-networkd[1361]: cali0e0fbe8684c: Link UP Sep 12 17:35:20.164199 systemd-networkd[1361]: cali0e0fbe8684c: Gained carrier Sep 12 17:35:20.217381 containerd[1470]: time="2025-09-12T17:35:20.216342616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:20.217381 containerd[1470]: time="2025-09-12T17:35:20.216481770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:20.217381 containerd[1470]: time="2025-09-12T17:35:20.216548590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.217381 containerd[1470]: time="2025-09-12T17:35:20.216740988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.789 [INFO][3863] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.837 [INFO][3863] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0 csi-node-driver- calico-system c589e8cc-160a-4a03-8cff-84be5e73deb3 915 0 2025-09-12 17:34:52 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 csi-node-driver-cl7zl eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0e0fbe8684c [] [] }} ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.837 [INFO][3863] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.933 [INFO][3881] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" HandleID="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.218396 containerd[1470]: 
2025-09-12 17:35:19.933 [INFO][3881] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" HandleID="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000101d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"csi-node-driver-cl7zl", "timestamp":"2025-09-12 17:35:19.93368582 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.935 [INFO][3881] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.935 [INFO][3881] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:19.935 [INFO][3881] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.026 [INFO][3881] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.043 [INFO][3881] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.073 [INFO][3881] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.087 [INFO][3881] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.096 [INFO][3881] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.099 [INFO][3881] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.103 [INFO][3881] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438 Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.112 [INFO][3881] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.127 [INFO][3881] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.130/26] block=192.168.73.128/26 handle="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.127 [INFO][3881] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.130/26] handle="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.127 [INFO][3881] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:20.218396 containerd[1470]: 2025-09-12 17:35:20.127 [INFO][3881] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.130/26] IPv6=[] ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" HandleID="k8s-pod-network.44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.222584 containerd[1470]: 2025-09-12 17:35:20.135 [INFO][3863] cni-plugin/k8s.go 418: Populated endpoint ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c589e8cc-160a-4a03-8cff-84be5e73deb3", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"csi-node-driver-cl7zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e0fbe8684c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:20.222584 containerd[1470]: 2025-09-12 17:35:20.135 [INFO][3863] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.130/32] ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.222584 containerd[1470]: 2025-09-12 17:35:20.135 [INFO][3863] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0e0fbe8684c ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.222584 containerd[1470]: 2025-09-12 17:35:20.174 [INFO][3863] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.222584 
containerd[1470]: 2025-09-12 17:35:20.176 [INFO][3863] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c589e8cc-160a-4a03-8cff-84be5e73deb3", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438", Pod:"csi-node-driver-cl7zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e0fbe8684c", MAC:"5a:6f:3a:7c:aa:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:20.222584 containerd[1470]: 
2025-09-12 17:35:20.204 [INFO][3863] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438" Namespace="calico-system" Pod="csi-node-driver-cl7zl" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:20.339600 systemd[1]: Started cri-containerd-8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3.scope - libcontainer container 8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3. Sep 12 17:35:20.350291 containerd[1470]: time="2025-09-12T17:35:20.350187751Z" level=info msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" Sep 12 17:35:20.356169 containerd[1470]: time="2025-09-12T17:35:20.355572130Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:20.380053 containerd[1470]: time="2025-09-12T17:35:20.356202984Z" level=info msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\"" Sep 12 17:35:20.380269 kubelet[2521]: I0912 17:35:20.378939 2521 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f56838a8-b323-497d-aa49-d43c2031138f" path="/var/lib/kubelet/pods/f56838a8-b323-497d-aa49-d43c2031138f/volumes" Sep 12 17:35:20.550108 containerd[1470]: time="2025-09-12T17:35:20.549758561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:20.559410 containerd[1470]: time="2025-09-12T17:35:20.558652568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:20.559410 containerd[1470]: time="2025-09-12T17:35:20.558714517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.559410 containerd[1470]: time="2025-09-12T17:35:20.558896598Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:20.680288 kubelet[2521]: I0912 17:35:20.679616 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:35:20.730407 systemd[1]: Started cri-containerd-44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438.scope - libcontainer container 44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438. Sep 12 17:35:21.053845 containerd[1470]: time="2025-09-12T17:35:21.053584355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-f7b85f5cd-b7jzh,Uid:122e55ca-1e61-46b6-9deb-3aaea9fecba6,Namespace:calico-system,Attempt:0,} returns sandbox id \"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3\"" Sep 12 17:35:21.075732 containerd[1470]: time="2025-09-12T17:35:21.075497916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 17:35:21.250107 containerd[1470]: time="2025-09-12T17:35:21.249517724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cl7zl,Uid:c589e8cc-160a-4a03-8cff-84be5e73deb3,Namespace:calico-system,Attempt:1,} returns sandbox id \"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438\"" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.728 [INFO][3996] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.728 [INFO][3996] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" iface="eth0" netns="/var/run/netns/cni-baa73d78-c59d-2abe-5890-cdbe2e3cb1b8" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.739 [INFO][3996] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" iface="eth0" netns="/var/run/netns/cni-baa73d78-c59d-2abe-5890-cdbe2e3cb1b8" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.745 [INFO][3996] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" iface="eth0" netns="/var/run/netns/cni-baa73d78-c59d-2abe-5890-cdbe2e3cb1b8" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.745 [INFO][3996] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:20.745 [INFO][3996] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.176 [INFO][4031] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.177 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.178 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.221 [WARNING][4031] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.221 [INFO][4031] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.229 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:21.278075 containerd[1470]: 2025-09-12 17:35:21.240 [INFO][3996] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:21.282465 containerd[1470]: time="2025-09-12T17:35:21.281356014Z" level=info msg="TearDown network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" successfully" Sep 12 17:35:21.282465 containerd[1470]: time="2025-09-12T17:35:21.281517511Z" level=info msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" returns successfully" Sep 12 17:35:21.278131 systemd[1]: run-containerd-runc-k8s.io-10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc-runc.BSrmiN.mount: Deactivated successfully. 
Sep 12 17:35:21.285500 kubelet[2521]: E0912 17:35:21.282829 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:21.285629 containerd[1470]: time="2025-09-12T17:35:21.285577780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdjvw,Uid:058bb08d-c113-4683-9906-785068b1e043,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:21.307243 systemd[1]: run-netns-cni\x2dbaa73d78\x2dc59d\x2d2abe\x2d5890\x2dcdbe2e3cb1b8.mount: Deactivated successfully. Sep 12 17:35:21.335147 containerd[1470]: time="2025-09-12T17:35:21.334814078Z" level=info msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\"" Sep 12 17:35:21.372997 systemd-networkd[1361]: cali4b4770355c0: Gained IPv6LL Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.838 [INFO][3994] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.843 [INFO][3994] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="/var/run/netns/cni-b2b2b0c7-e6a7-9544-d79e-17969737d0f2" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.844 [INFO][3994] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="/var/run/netns/cni-b2b2b0c7-e6a7-9544-d79e-17969737d0f2" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.845 [INFO][3994] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="/var/run/netns/cni-b2b2b0c7-e6a7-9544-d79e-17969737d0f2" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.852 [INFO][3994] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:20.852 [INFO][3994] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.292 [INFO][4046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.295 [INFO][4046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.296 [INFO][4046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.384 [WARNING][4046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.385 [INFO][4046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.393 [INFO][4046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:21.457044 containerd[1470]: 2025-09-12 17:35:21.408 [INFO][3994] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:21.466444 containerd[1470]: time="2025-09-12T17:35:21.456980322Z" level=info msg="TearDown network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" successfully" Sep 12 17:35:21.466444 containerd[1470]: time="2025-09-12T17:35:21.457125633Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" returns successfully" Sep 12 17:35:21.466444 containerd[1470]: time="2025-09-12T17:35:21.461492209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:21.466615 kubelet[2521]: E0912 17:35:21.460443 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:21.466976 systemd[1]: run-netns-cni\x2db2b2b0c7\x2de6a7\x2d9544\x2dd79e\x2d17969737d0f2.mount: 
Deactivated successfully. Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.914 [INFO][3995] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.930 [INFO][3995] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" iface="eth0" netns="/var/run/netns/cni-7a7e727c-0554-9fd9-a0db-dfce6d4fb952" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.930 [INFO][3995] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" iface="eth0" netns="/var/run/netns/cni-7a7e727c-0554-9fd9-a0db-dfce6d4fb952" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.933 [INFO][3995] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" iface="eth0" netns="/var/run/netns/cni-7a7e727c-0554-9fd9-a0db-dfce6d4fb952" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.933 [INFO][3995] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:20.933 [INFO][3995] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.415 [INFO][4057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:21.512078 containerd[1470]: 
2025-09-12 17:35:21.415 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.428 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.481 [WARNING][4057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.481 [INFO][4057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.488 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:21.512078 containerd[1470]: 2025-09-12 17:35:21.504 [INFO][3995] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Sep 12 17:35:21.515688 containerd[1470]: time="2025-09-12T17:35:21.512232459Z" level=info msg="TearDown network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" successfully" Sep 12 17:35:21.515688 containerd[1470]: time="2025-09-12T17:35:21.515178216Z" level=info msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" returns successfully" Sep 12 17:35:21.520265 systemd[1]: run-netns-cni\x2d7a7e727c\x2d0554\x2d9fd9\x2da0db\x2ddfce6d4fb952.mount: Deactivated successfully. 
Sep 12 17:35:21.526201 containerd[1470]: time="2025-09-12T17:35:21.523657825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kxmsn,Uid:d953a4e7-ed0f-478f-9cb2-fa836708ae8f,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:22.135772 systemd-networkd[1361]: cali0e0fbe8684c: Gained IPv6LL Sep 12 17:35:22.252160 systemd-networkd[1361]: calicdcea1ec6c0: Link UP Sep 12 17:35:22.252624 systemd-networkd[1361]: calicdcea1ec6c0: Gained carrier Sep 12 17:35:22.336039 containerd[1470]: time="2025-09-12T17:35:22.334456761Z" level=info msg="StopPodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:21.696 [INFO][4095] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:21.768 [INFO][4095] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0 coredns-7c65d6cfc9- kube-system 058bb08d-c113-4683-9906-785068b1e043 936 0 2025-09-12 17:34:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 coredns-7c65d6cfc9-hdjvw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicdcea1ec6c0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:21.768 [INFO][4095] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.080 [INFO][4148] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" HandleID="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.081 [INFO][4148] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" HandleID="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001f1d30), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"coredns-7c65d6cfc9-hdjvw", "timestamp":"2025-09-12 17:35:22.080769337 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.081 [INFO][4148] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.081 [INFO][4148] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.081 [INFO][4148] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.103 [INFO][4148] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.129 [INFO][4148] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.171 [INFO][4148] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.177 [INFO][4148] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.182 [INFO][4148] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.182 [INFO][4148] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.187 [INFO][4148] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.197 [INFO][4148] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.218 [INFO][4148] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.131/26] block=192.168.73.128/26 handle="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.219 [INFO][4148] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.131/26] handle="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.219 [INFO][4148] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:22.346002 containerd[1470]: 2025-09-12 17:35:22.219 [INFO][4148] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.131/26] IPv6=[] ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" HandleID="k8s-pod-network.99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.231 [INFO][4095] cni-plugin/k8s.go 418: Populated endpoint ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"058bb08d-c113-4683-9906-785068b1e043", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"coredns-7c65d6cfc9-hdjvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdcea1ec6c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.233 [INFO][4095] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.131/32] ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.234 [INFO][4095] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicdcea1ec6c0 ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.257 [INFO][4095] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.275 [INFO][4095] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"058bb08d-c113-4683-9906-785068b1e043", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a", Pod:"coredns-7c65d6cfc9-hdjvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdcea1ec6c0", 
MAC:"ba:cf:13:d3:e5:df", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:22.346951 containerd[1470]: 2025-09-12 17:35:22.323 [INFO][4095] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a" Namespace="kube-system" Pod="coredns-7c65d6cfc9-hdjvw" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:22.543291 kernel: bpftool[4225]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 17:35:22.564402 containerd[1470]: time="2025-09-12T17:35:22.562339380Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:22.564402 containerd[1470]: time="2025-09-12T17:35:22.562446093Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:22.564402 containerd[1470]: time="2025-09-12T17:35:22.562468032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:22.564402 containerd[1470]: time="2025-09-12T17:35:22.562655061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:22.679856 systemd[1]: run-containerd-runc-k8s.io-99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a-runc.kmQkLC.mount: Deactivated successfully. Sep 12 17:35:22.700702 systemd[1]: Started cri-containerd-99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a.scope - libcontainer container 99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a. Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.855 [INFO][4112] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.856 [INFO][4112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" iface="eth0" netns="/var/run/netns/cni-ef6db50c-79c4-3110-ee27-c848602fa6bf" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.859 [INFO][4112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" iface="eth0" netns="/var/run/netns/cni-ef6db50c-79c4-3110-ee27-c848602fa6bf" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.862 [INFO][4112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" iface="eth0" netns="/var/run/netns/cni-ef6db50c-79c4-3110-ee27-c848602fa6bf" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.862 [INFO][4112] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:21.862 [INFO][4112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.130 [INFO][4153] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.130 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.626 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.646 [WARNING][4153] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.646 [INFO][4153] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.652 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:22.740259 containerd[1470]: 2025-09-12 17:35:22.715 [INFO][4112] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Sep 12 17:35:22.749519 containerd[1470]: time="2025-09-12T17:35:22.745555364Z" level=info msg="TearDown network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" successfully" Sep 12 17:35:22.749519 containerd[1470]: time="2025-09-12T17:35:22.745680121Z" level=info msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" returns successfully" Sep 12 17:35:22.749519 containerd[1470]: time="2025-09-12T17:35:22.748580889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-mssfh,Uid:aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:22.746847 systemd[1]: run-netns-cni\x2def6db50c\x2d79c4\x2d3110\x2dee27\x2dc848602fa6bf.mount: Deactivated successfully. 
Sep 12 17:35:23.001834 systemd-networkd[1361]: cali681be8ebc06: Link UP Sep 12 17:35:23.024812 systemd-networkd[1361]: cali681be8ebc06: Gained carrier Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.684 [INFO][4199] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.684 [INFO][4199] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" iface="eth0" netns="/var/run/netns/cni-dcc881ff-83a9-8e09-5e6f-ed770e6321ff" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.684 [INFO][4199] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" iface="eth0" netns="/var/run/netns/cni-dcc881ff-83a9-8e09-5e6f-ed770e6321ff" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.687 [INFO][4199] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" iface="eth0" netns="/var/run/netns/cni-dcc881ff-83a9-8e09-5e6f-ed770e6321ff" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.688 [INFO][4199] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.688 [INFO][4199] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.910 [INFO][4251] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.920 [INFO][4251] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.920 [INFO][4251] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.968 [WARNING][4251] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.968 [INFO][4251] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:22.980 [INFO][4251] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:23.047090 containerd[1470]: 2025-09-12 17:35:23.036 [INFO][4199] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:23.050265 containerd[1470]: time="2025-09-12T17:35:23.049068581Z" level=info msg="TearDown network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" successfully" Sep 12 17:35:23.050265 containerd[1470]: time="2025-09-12T17:35:23.049137502Z" level=info msg="StopPodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" returns successfully" Sep 12 17:35:23.050712 containerd[1470]: time="2025-09-12T17:35:23.050536855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d66ff4cd-cjszq,Uid:47141fc5-2666-4c35-9d69-2367a7803e63,Namespace:calico-system,Attempt:1,}" Sep 12 17:35:23.094328 containerd[1470]: time="2025-09-12T17:35:23.093990597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdjvw,Uid:058bb08d-c113-4683-9906-785068b1e043,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a\"" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:21.773 [INFO][4130] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:21.873 [INFO][4130] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0 goldmane-7988f88666- calico-system d953a4e7-ed0f-478f-9cb2-fa836708ae8f 938 0 2025-09-12 17:34:51 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 goldmane-7988f88666-kxmsn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali681be8ebc06 [] [] }} ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:21.874 [INFO][4130] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.141 [INFO][4163] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" HandleID="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.143 [INFO][4163] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" HandleID="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000352980), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"goldmane-7988f88666-kxmsn", "timestamp":"2025-09-12 17:35:22.140670211 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.143 [INFO][4163] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.652 [INFO][4163] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.661 [INFO][4163] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.758 [INFO][4163] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.789 [INFO][4163] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.809 [INFO][4163] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.830 [INFO][4163] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.849 [INFO][4163] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.853 [INFO][4163] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.861 [INFO][4163] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6 Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.882 [INFO][4163] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.913 [INFO][4163] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.132/26] block=192.168.73.128/26 handle="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.913 [INFO][4163] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.132/26] handle="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.913 [INFO][4163] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:23.122515 containerd[1470]: 2025-09-12 17:35:22.913 [INFO][4163] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.132/26] IPv6=[] ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" HandleID="k8s-pod-network.8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:22.952 [INFO][4130] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d953a4e7-ed0f-478f-9cb2-fa836708ae8f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"goldmane-7988f88666-kxmsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681be8ebc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:22.953 [INFO][4130] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.132/32] ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:22.953 [INFO][4130] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali681be8ebc06 ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:23.031 [INFO][4130] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:23.038 [INFO][4130] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d953a4e7-ed0f-478f-9cb2-fa836708ae8f", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6", Pod:"goldmane-7988f88666-kxmsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681be8ebc06", MAC:"36:0c:9b:20:0e:29", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:23.124847 containerd[1470]: 2025-09-12 17:35:23.086 [INFO][4130] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6" Namespace="calico-system" Pod="goldmane-7988f88666-kxmsn" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0" Sep 12 17:35:23.125168 kubelet[2521]: E0912 17:35:23.123183 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:23.160576 containerd[1470]: time="2025-09-12T17:35:23.159788640Z" level=info msg="CreateContainer within sandbox \"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:35:23.277591 containerd[1470]: time="2025-09-12T17:35:23.275446522Z" level=info msg="CreateContainer within sandbox \"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d2d899e414688875ba0289047b565424c62176a89a0f42d3dfe218595f2e52f1\"" Sep 12 17:35:23.281150 containerd[1470]: time="2025-09-12T17:35:23.281088503Z" level=info msg="StartContainer for \"d2d899e414688875ba0289047b565424c62176a89a0f42d3dfe218595f2e52f1\"" Sep 12 17:35:23.307666 systemd[1]: run-netns-cni\x2ddcc881ff\x2d83a9\x2d8e09\x2d5e6f\x2ded770e6321ff.mount: Deactivated successfully. Sep 12 17:35:23.383483 containerd[1470]: time="2025-09-12T17:35:23.383330798Z" level=info msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" Sep 12 17:35:23.452966 containerd[1470]: time="2025-09-12T17:35:23.452592625Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:23.452966 containerd[1470]: time="2025-09-12T17:35:23.452687850Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:23.452966 containerd[1470]: time="2025-09-12T17:35:23.452705048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:23.452966 containerd[1470]: time="2025-09-12T17:35:23.452807531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.967 [INFO][4272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.981 [INFO][4272] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" iface="eth0" netns="/var/run/netns/cni-701b6418-786b-7b3c-51be-12b0fb86a994" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.982 [INFO][4272] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" iface="eth0" netns="/var/run/netns/cni-701b6418-786b-7b3c-51be-12b0fb86a994" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.982 [INFO][4272] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" iface="eth0" netns="/var/run/netns/cni-701b6418-786b-7b3c-51be-12b0fb86a994" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.983 [INFO][4272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:22.983 [INFO][4272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.372 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" HandleID="k8s-pod-network.555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.374 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.374 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.433 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" HandleID="k8s-pod-network.555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.433 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" HandleID="k8s-pod-network.555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.448 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:23.504258 containerd[1470]: 2025-09-12 17:35:23.475 [INFO][4272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424" Sep 12 17:35:23.510370 systemd[1]: run-netns-cni\x2d701b6418\x2d786b\x2d7b3c\x2d51be\x2d12b0fb86a994.mount: Deactivated successfully. Sep 12 17:35:23.522303 systemd[1]: Started cri-containerd-d2d899e414688875ba0289047b565424c62176a89a0f42d3dfe218595f2e52f1.scope - libcontainer container d2d899e414688875ba0289047b565424c62176a89a0f42d3dfe218595f2e52f1. 
Sep 12 17:35:23.530442 containerd[1470]: time="2025-09-12T17:35:23.529786534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" Sep 12 17:35:23.531887 kubelet[2521]: E0912 17:35:23.531710 2521 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" Sep 12 17:35:23.531887 kubelet[2521]: E0912 17:35:23.531803 2521 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:23.531887 kubelet[2521]: E0912 17:35:23.531831 2521 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424\": plugin type=\"calico\" failed (add): failed to look up reserved IPs: 
Get \"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\": tls: failed to verify certificate: x509: certificate signed by unknown authority" pod="kube-system/coredns-7c65d6cfc9-tnpg9" Sep 12 17:35:23.532335 kubelet[2521]: E0912 17:35:23.531886 2521 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tnpg9_kube-system(37a86f89-3968-4ebf-bcc4-c3d17db0dd1b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tnpg9_kube-system(37a86f89-3968-4ebf-bcc4-c3d17db0dd1b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424\\\": plugin type=\\\"calico\\\" failed (add): failed to look up reserved IPs: Get \\\"https://10.96.0.1:443/apis/crd.projectcalico.org/v1/ipreservations?limit=500\\\": tls: failed to verify certificate: x509: certificate signed by unknown authority\"" pod="kube-system/coredns-7c65d6cfc9-tnpg9" podUID="37a86f89-3968-4ebf-bcc4-c3d17db0dd1b" Sep 12 17:35:23.542634 systemd-networkd[1361]: calicdcea1ec6c0: Gained IPv6LL Sep 12 17:35:23.602959 systemd[1]: Started cri-containerd-8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6.scope - libcontainer container 8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6. 
Sep 12 17:35:23.641508 containerd[1470]: time="2025-09-12T17:35:23.641346881Z" level=info msg="StartContainer for \"d2d899e414688875ba0289047b565424c62176a89a0f42d3dfe218595f2e52f1\" returns successfully" Sep 12 17:35:23.972236 systemd-networkd[1361]: cali7da59dc6419: Link UP Sep 12 17:35:23.974903 systemd-networkd[1361]: cali7da59dc6419: Gained carrier Sep 12 17:35:24.032302 kubelet[2521]: E0912 17:35:24.032237 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:24.038429 containerd[1470]: time="2025-09-12T17:35:24.037327056Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.342 [INFO][4282] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0 calico-apiserver-56b75c89b8- calico-apiserver aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05 947 0 2025-09-12 17:34:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56b75c89b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 calico-apiserver-56b75c89b8-mssfh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7da59dc6419 [] [] }} ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.342 [INFO][4282] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.754 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" HandleID="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.757 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" HandleID="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-8-31c29e3945", "pod":"calico-apiserver-56b75c89b8-mssfh", "timestamp":"2025-09-12 17:35:23.754075323 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.757 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.757 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.757 [INFO][4365] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.796 [INFO][4365] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.824 [INFO][4365] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.849 [INFO][4365] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.859 [INFO][4365] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.868 [INFO][4365] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.869 [INFO][4365] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.878 [INFO][4365] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832 Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.895 [INFO][4365] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.934 [INFO][4365] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.133/26] block=192.168.73.128/26 handle="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.934 [INFO][4365] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.133/26] handle="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.939 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:24.083636 containerd[1470]: 2025-09-12 17:35:23.939 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.133/26] IPv6=[] ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" HandleID="k8s-pod-network.96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:23.956 [INFO][4282] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"calico-apiserver-56b75c89b8-mssfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7da59dc6419", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:23.957 [INFO][4282] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.133/32] ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:23.957 [INFO][4282] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7da59dc6419 ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:23.977 [INFO][4282] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" 
Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:23.977 [INFO][4282] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05", ResourceVersion:"947", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832", Pod:"calico-apiserver-56b75c89b8-mssfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali7da59dc6419", MAC:"76:d8:f3:96:1b:c6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:24.084902 containerd[1470]: 2025-09-12 17:35:24.044 [INFO][4282] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-mssfh" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0" Sep 12 17:35:24.097561 kubelet[2521]: I0912 17:35:24.096542 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hdjvw" podStartSLOduration=48.096507943 podStartE2EDuration="48.096507943s" podCreationTimestamp="2025-09-12 17:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:24.095982203 +0000 UTC m=+52.018191248" watchObservedRunningTime="2025-09-12 17:35:24.096507943 +0000 UTC m=+52.018717031" Sep 12 17:35:24.112972 containerd[1470]: time="2025-09-12T17:35:24.112925286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-kxmsn,Uid:d953a4e7-ed0f-478f-9cb2-fa836708ae8f,Namespace:calico-system,Attempt:1,} returns sandbox id \"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6\"" Sep 12 17:35:24.265395 containerd[1470]: time="2025-09-12T17:35:24.264609716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:24.265395 containerd[1470]: time="2025-09-12T17:35:24.264743254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:24.265395 containerd[1470]: time="2025-09-12T17:35:24.264758403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.265395 containerd[1470]: time="2025-09-12T17:35:24.264925014Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.312924 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-555744e58ad453af2bc038603535bbfcc3dc36951f5c761d23873df5638f7424-shm.mount: Deactivated successfully. Sep 12 17:35:24.360736 systemd-networkd[1361]: calic3938091101: Link UP Sep 12 17:35:24.361036 systemd-networkd[1361]: calic3938091101: Gained carrier Sep 12 17:35:24.377189 systemd-networkd[1361]: cali681be8ebc06: Gained IPv6LL Sep 12 17:35:24.404219 systemd[1]: Started cri-containerd-96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832.scope - libcontainer container 96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832. Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.883 [INFO][4371] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.883 [INFO][4371] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" iface="eth0" netns="/var/run/netns/cni-dc2f9deb-04ca-abf8-2add-dda4dc53a038" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.884 [INFO][4371] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" iface="eth0" netns="/var/run/netns/cni-dc2f9deb-04ca-abf8-2add-dda4dc53a038" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.885 [INFO][4371] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" iface="eth0" netns="/var/run/netns/cni-dc2f9deb-04ca-abf8-2add-dda4dc53a038" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.886 [INFO][4371] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:23.886 [INFO][4371] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.161 [INFO][4460] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.161 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.288 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.350 [WARNING][4460] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.350 [INFO][4460] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.363 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:24.417871 containerd[1470]: 2025-09-12 17:35:24.393 [INFO][4371] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:24.423699 containerd[1470]: time="2025-09-12T17:35:24.421400182Z" level=info msg="TearDown network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" successfully" Sep 12 17:35:24.423699 containerd[1470]: time="2025-09-12T17:35:24.421446983Z" level=info msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" returns successfully" Sep 12 17:35:24.425526 systemd[1]: run-netns-cni\x2ddc2f9deb\x2d04ca\x2dabf8\x2d2add\x2ddda4dc53a038.mount: Deactivated successfully. 
Sep 12 17:35:24.436337 containerd[1470]: time="2025-09-12T17:35:24.436265696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-5w8s5,Uid:fade2c26-8080-4e6a-9beb-ec982854f037,Namespace:calico-apiserver,Attempt:1,}" Sep 12 17:35:24.437064 containerd[1470]: time="2025-09-12T17:35:24.436671281Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.449680 containerd[1470]: time="2025-09-12T17:35:24.449581389Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 17:35:24.457511 containerd[1470]: time="2025-09-12T17:35:24.457440724Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.605 [INFO][4313] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0 calico-kube-controllers-9d66ff4cd- calico-system 47141fc5-2666-4c35-9d69-2367a7803e63 952 0 2025-09-12 17:34:52 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:9d66ff4cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 calico-kube-controllers-9d66ff4cd-cjszq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic3938091101 [] [] }} ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" 
WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.605 [INFO][4313] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.803 [INFO][4435] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" HandleID="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.803 [INFO][4435] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" HandleID="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fed0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"calico-kube-controllers-9d66ff4cd-cjszq", "timestamp":"2025-09-12 17:35:23.802997438 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.803 [INFO][4435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.945 [INFO][4435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:23.946 [INFO][4435] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.015 [INFO][4435] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.071 [INFO][4435] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.126 [INFO][4435] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.154 [INFO][4435] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.181 [INFO][4435] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.184 [INFO][4435] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.222 [INFO][4435] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.246 [INFO][4435] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" 
host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.284 [INFO][4435] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.73.134/26] block=192.168.73.128/26 handle="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.285 [INFO][4435] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.134/26] handle="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.285 [INFO][4435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:24.463628 containerd[1470]: 2025-09-12 17:35:24.287 [INFO][4435] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.134/26] IPv6=[] ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" HandleID="k8s-pod-network.989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.321 [INFO][4313] cni-plugin/k8s.go 418: Populated endpoint ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0", GenerateName:"calico-kube-controllers-9d66ff4cd-", Namespace:"calico-system", SelfLink:"", UID:"47141fc5-2666-4c35-9d69-2367a7803e63", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d66ff4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"calico-kube-controllers-9d66ff4cd-cjszq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3938091101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.325 [INFO][4313] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.134/32] ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.325 [INFO][4313] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic3938091101 ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.367 [INFO][4313] cni-plugin/dataplane_linux.go 
508: Disabling IPv4 forwarding ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.374 [INFO][4313] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0", GenerateName:"calico-kube-controllers-9d66ff4cd-", Namespace:"calico-system", SelfLink:"", UID:"47141fc5-2666-4c35-9d69-2367a7803e63", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d66ff4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f", Pod:"calico-kube-controllers-9d66ff4cd-cjszq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3938091101", MAC:"86:60:0f:19:65:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:24.464592 containerd[1470]: 2025-09-12 17:35:24.438 [INFO][4313] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f" Namespace="calico-system" Pod="calico-kube-controllers-9d66ff4cd-cjszq" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:24.481639 containerd[1470]: time="2025-09-12T17:35:24.480495324Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:24.481639 containerd[1470]: time="2025-09-12T17:35:24.481456223Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.405333145s" Sep 12 17:35:24.481639 containerd[1470]: time="2025-09-12T17:35:24.481507269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 17:35:24.489776 containerd[1470]: time="2025-09-12T17:35:24.488879668Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 17:35:24.614216 containerd[1470]: time="2025-09-12T17:35:24.597782278Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:24.614216 containerd[1470]: time="2025-09-12T17:35:24.597920467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:24.614216 containerd[1470]: time="2025-09-12T17:35:24.597947201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.614216 containerd[1470]: time="2025-09-12T17:35:24.598101629Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.455 [INFO][4486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.455 [INFO][4486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.455 [INFO][4486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.455 [INFO][4486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.630 [INFO][4538] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.631 [INFO][4538] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.631 [INFO][4538] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.650 [WARNING][4538] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.650 [INFO][4538] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.654 [INFO][4538] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:24.669805 containerd[1470]: 2025-09-12 17:35:24.659 [INFO][4486] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:24.676358 containerd[1470]: time="2025-09-12T17:35:24.671711948Z" level=info msg="TearDown network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" successfully" Sep 12 17:35:24.676358 containerd[1470]: time="2025-09-12T17:35:24.671808860Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" returns successfully" Sep 12 17:35:24.676437 kubelet[2521]: E0912 17:35:24.672243 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:24.670347 systemd[1]: Started cri-containerd-989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f.scope - libcontainer container 989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f. 
Sep 12 17:35:24.684299 containerd[1470]: time="2025-09-12T17:35:24.684229873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:1,}" Sep 12 17:35:24.685826 containerd[1470]: time="2025-09-12T17:35:24.685765064Z" level=info msg="CreateContainer within sandbox \"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 17:35:24.730602 containerd[1470]: time="2025-09-12T17:35:24.730518776Z" level=info msg="CreateContainer within sandbox \"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7083183afd60916e69e49a6b5e7b396997414c9369d06ed908ae95a71c5c196e\"" Sep 12 17:35:24.733378 containerd[1470]: time="2025-09-12T17:35:24.733330522Z" level=info msg="StartContainer for \"7083183afd60916e69e49a6b5e7b396997414c9369d06ed908ae95a71c5c196e\"" Sep 12 17:35:24.863344 systemd[1]: Started cri-containerd-7083183afd60916e69e49a6b5e7b396997414c9369d06ed908ae95a71c5c196e.scope - libcontainer container 7083183afd60916e69e49a6b5e7b396997414c9369d06ed908ae95a71c5c196e. 
Sep 12 17:35:24.870949 containerd[1470]: time="2025-09-12T17:35:24.870248749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-mssfh,Uid:aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832\"" Sep 12 17:35:25.016750 containerd[1470]: time="2025-09-12T17:35:25.016685132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-9d66ff4cd-cjszq,Uid:47141fc5-2666-4c35-9d69-2367a7803e63,Namespace:calico-system,Attempt:1,} returns sandbox id \"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f\"" Sep 12 17:35:25.079226 systemd-networkd[1361]: cali7da59dc6419: Gained IPv6LL Sep 12 17:35:25.092816 kubelet[2521]: E0912 17:35:25.091886 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:25.166339 systemd-networkd[1361]: calicfd3b01975d: Link UP Sep 12 17:35:25.170329 systemd-networkd[1361]: calicfd3b01975d: Gained carrier Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:24.848 [INFO][4594] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0 coredns-7c65d6cfc9- kube-system 37a86f89-3968-4ebf-bcc4-c3d17db0dd1b 937 0 2025-09-12 17:34:36 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 coredns-7c65d6cfc9-tnpg9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicfd3b01975d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:24.851 [INFO][4594] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:24.987 [INFO][4635] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" HandleID="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:24.998 [INFO][4635] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" HandleID="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000410e50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.6-8-31c29e3945", "pod":"coredns-7c65d6cfc9-tnpg9", "timestamp":"2025-09-12 17:35:24.985355628 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:24.998 [INFO][4635] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.000 [INFO][4635] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.000 [INFO][4635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.020 [INFO][4635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.043 [INFO][4635] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.069 [INFO][4635] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.078 [INFO][4635] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.097 [INFO][4635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.097 [INFO][4635] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.108 [INFO][4635] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8 Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.122 [INFO][4635] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.135 [INFO][4635] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.135/26] block=192.168.73.128/26 handle="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.135 [INFO][4635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.135/26] handle="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.135 [INFO][4635] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:25.216727 containerd[1470]: 2025-09-12 17:35:25.136 [INFO][4635] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.135/26] IPv6=[] ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" HandleID="k8s-pod-network.b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.148 [INFO][4594] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"coredns-7c65d6cfc9-tnpg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfd3b01975d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.149 [INFO][4594] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.135/32] ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.151 [INFO][4594] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicfd3b01975d ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.179 [INFO][4594] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.181 [INFO][4594] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8", Pod:"coredns-7c65d6cfc9-tnpg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfd3b01975d", 
MAC:"ae:00:d2:98:d5:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:25.219146 containerd[1470]: 2025-09-12 17:35:25.210 [INFO][4594] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tnpg9" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:25.279836 containerd[1470]: time="2025-09-12T17:35:25.279759956Z" level=info msg="StartContainer for \"7083183afd60916e69e49a6b5e7b396997414c9369d06ed908ae95a71c5c196e\" returns successfully" Sep 12 17:35:25.345631 systemd-networkd[1361]: calie5e443ac887: Link UP Sep 12 17:35:25.355314 systemd-networkd[1361]: calie5e443ac887: Gained carrier Sep 12 17:35:25.374928 containerd[1470]: time="2025-09-12T17:35:25.373853117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:25.374928 containerd[1470]: time="2025-09-12T17:35:25.373957771Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:25.374928 containerd[1470]: time="2025-09-12T17:35:25.373984665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:25.374928 containerd[1470]: time="2025-09-12T17:35:25.374157658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:24.862 [INFO][4548] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0 calico-apiserver-56b75c89b8- calico-apiserver fade2c26-8080-4e6a-9beb-ec982854f037 967 0 2025-09-12 17:34:46 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:56b75c89b8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.6-8-31c29e3945 calico-apiserver-56b75c89b8-5w8s5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie5e443ac887 [] [] }} ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:24.866 [INFO][4548] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.027 [INFO][4640] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" 
HandleID="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.028 [INFO][4640] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" HandleID="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fa30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.6-8-31c29e3945", "pod":"calico-apiserver-56b75c89b8-5w8s5", "timestamp":"2025-09-12 17:35:25.027771975 +0000 UTC"}, Hostname:"ci-4081.3.6-8-31c29e3945", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.028 [INFO][4640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.136 [INFO][4640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.136 [INFO][4640] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.6-8-31c29e3945' Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.164 [INFO][4640] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.183 [INFO][4640] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.202 [INFO][4640] ipam/ipam.go 511: Trying affinity for 192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.213 [INFO][4640] ipam/ipam.go 158: Attempting to load block cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.221 [INFO][4640] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.73.128/26 host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.221 [INFO][4640] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.73.128/26 handle="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.227 [INFO][4640] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4 Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.254 [INFO][4640] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.73.128/26 handle="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.284 [INFO][4640] ipam/ipam.go 1256: 
Successfully claimed IPs: [192.168.73.136/26] block=192.168.73.128/26 handle="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.287 [INFO][4640] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.73.136/26] handle="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" host="ci-4081.3.6-8-31c29e3945" Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.288 [INFO][4640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:25.431603 containerd[1470]: 2025-09-12 17:35:25.289 [INFO][4640] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.73.136/26] IPv6=[] ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" HandleID="k8s-pod-network.4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.331 [INFO][4548] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fade2c26-8080-4e6a-9beb-ec982854f037", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"", Pod:"calico-apiserver-56b75c89b8-5w8s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e443ac887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.334 [INFO][4548] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.73.136/32] ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.334 [INFO][4548] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie5e443ac887 ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.355 [INFO][4548] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" 
Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.359 [INFO][4548] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fade2c26-8080-4e6a-9beb-ec982854f037", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4", Pod:"calico-apiserver-56b75c89b8-5w8s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"calie5e443ac887", MAC:"82:de:10:08:6e:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:25.432488 containerd[1470]: 2025-09-12 17:35:25.400 [INFO][4548] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4" Namespace="calico-apiserver" Pod="calico-apiserver-56b75c89b8-5w8s5" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:25.452351 systemd[1]: Started cri-containerd-b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8.scope - libcontainer container b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8. Sep 12 17:35:25.485966 systemd-networkd[1361]: vxlan.calico: Link UP Sep 12 17:35:25.485979 systemd-networkd[1361]: vxlan.calico: Gained carrier Sep 12 17:35:25.553693 containerd[1470]: time="2025-09-12T17:35:25.551186996Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 17:35:25.553693 containerd[1470]: time="2025-09-12T17:35:25.551258826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 17:35:25.553693 containerd[1470]: time="2025-09-12T17:35:25.551270693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:25.553693 containerd[1470]: time="2025-09-12T17:35:25.551364038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 17:35:25.628369 systemd[1]: Started cri-containerd-4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4.scope - libcontainer container 4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4. 
Sep 12 17:35:25.726306 containerd[1470]: time="2025-09-12T17:35:25.726117633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tnpg9,Uid:37a86f89-3968-4ebf-bcc4-c3d17db0dd1b,Namespace:kube-system,Attempt:1,} returns sandbox id \"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8\"" Sep 12 17:35:25.730440 kubelet[2521]: E0912 17:35:25.729658 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:25.737272 containerd[1470]: time="2025-09-12T17:35:25.736668782Z" level=info msg="CreateContainer within sandbox \"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:35:25.779793 containerd[1470]: time="2025-09-12T17:35:25.779452501Z" level=info msg="CreateContainer within sandbox \"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87b00d62f0739c6f887d3b2ea1cf074ded771f54143730ae12c3fe0ea273df61\"" Sep 12 17:35:25.782669 containerd[1470]: time="2025-09-12T17:35:25.780997865Z" level=info msg="StartContainer for \"87b00d62f0739c6f887d3b2ea1cf074ded771f54143730ae12c3fe0ea273df61\"" Sep 12 17:35:25.900184 systemd[1]: Started cri-containerd-87b00d62f0739c6f887d3b2ea1cf074ded771f54143730ae12c3fe0ea273df61.scope - libcontainer container 87b00d62f0739c6f887d3b2ea1cf074ded771f54143730ae12c3fe0ea273df61. 
Sep 12 17:35:25.974365 systemd-networkd[1361]: calic3938091101: Gained IPv6LL Sep 12 17:35:26.010077 containerd[1470]: time="2025-09-12T17:35:26.009064801Z" level=info msg="StartContainer for \"87b00d62f0739c6f887d3b2ea1cf074ded771f54143730ae12c3fe0ea273df61\" returns successfully" Sep 12 17:35:26.107909 kubelet[2521]: E0912 17:35:26.107497 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:26.117096 kubelet[2521]: E0912 17:35:26.116312 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Sep 12 17:35:26.183468 systemd[1]: Started sshd@7-64.227.109.162:22-147.75.109.163:35444.service - OpenSSH per-connection server daemon (147.75.109.163:35444). Sep 12 17:35:26.194881 containerd[1470]: time="2025-09-12T17:35:26.194816660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-56b75c89b8-5w8s5,Uid:fade2c26-8080-4e6a-9beb-ec982854f037,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4\"" Sep 12 17:35:26.407910 sshd[4841]: Accepted publickey for core from 147.75.109.163 port 35444 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:35:26.421523 sshd[4841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:26.436077 systemd-logind[1455]: New session 8 of user core. Sep 12 17:35:26.441353 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:35:26.552633 systemd-networkd[1361]: calie5e443ac887: Gained IPv6LL
Sep 12 17:35:26.956611 containerd[1470]: time="2025-09-12T17:35:26.956375956Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.961367 containerd[1470]: time="2025-09-12T17:35:26.961279713Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527"
Sep 12 17:35:26.963311 containerd[1470]: time="2025-09-12T17:35:26.963226405Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.969205 containerd[1470]: time="2025-09-12T17:35:26.969140697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:26.972739 containerd[1470]: time="2025-09-12T17:35:26.972356369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.483412572s"
Sep 12 17:35:26.972739 containerd[1470]: time="2025-09-12T17:35:26.972425953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\""
Sep 12 17:35:26.977772 containerd[1470]: time="2025-09-12T17:35:26.976886926Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\""
Sep 12 17:35:26.995403 containerd[1470]: time="2025-09-12T17:35:26.995098139Z" level=info msg="CreateContainer within sandbox \"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Sep 12 17:35:27.132713 systemd-networkd[1361]: calicfd3b01975d: Gained IPv6LL
Sep 12 17:35:27.152850 containerd[1470]: time="2025-09-12T17:35:27.145727104Z" level=info msg="CreateContainer within sandbox \"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7ecd1e198456b608a1388f3698a87058010f4089e6a14fda880750aee6049b26\""
Sep 12 17:35:27.156394 containerd[1470]: time="2025-09-12T17:35:27.154542948Z" level=info msg="StartContainer for \"7ecd1e198456b608a1388f3698a87058010f4089e6a14fda880750aee6049b26\""
Sep 12 17:35:27.173537 kubelet[2521]: E0912 17:35:27.173120    2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:35:27.237785 kubelet[2521]: I0912 17:35:27.235248    2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tnpg9" podStartSLOduration=51.235215402 podStartE2EDuration="51.235215402s" podCreationTimestamp="2025-09-12 17:34:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:35:26.176637466 +0000 UTC m=+54.098846537" watchObservedRunningTime="2025-09-12 17:35:27.235215402 +0000 UTC m=+55.157424444"
Sep 12 17:35:27.373361 systemd[1]: Started cri-containerd-7ecd1e198456b608a1388f3698a87058010f4089e6a14fda880750aee6049b26.scope - libcontainer container 7ecd1e198456b608a1388f3698a87058010f4089e6a14fda880750aee6049b26.
Sep 12 17:35:27.382217 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL
Sep 12 17:35:27.551812 sshd[4841]: pam_unix(sshd:session): session closed for user core
Sep 12 17:35:27.562124 systemd-logind[1455]: Session 8 logged out. Waiting for processes to exit.
Sep 12 17:35:27.563400 systemd[1]: sshd@7-64.227.109.162:22-147.75.109.163:35444.service: Deactivated successfully.
Sep 12 17:35:27.568897 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 17:35:27.579105 systemd-logind[1455]: Removed session 8.
Sep 12 17:35:27.588458 containerd[1470]: time="2025-09-12T17:35:27.588382540Z" level=info msg="StartContainer for \"7ecd1e198456b608a1388f3698a87058010f4089e6a14fda880750aee6049b26\" returns successfully"
Sep 12 17:35:28.182053 kubelet[2521]: E0912 17:35:28.181862    2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:35:30.087380 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2692370691.mount: Deactivated successfully.
Sep 12 17:35:31.198256 containerd[1470]: time="2025-09-12T17:35:31.198173657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:31.199979 containerd[1470]: time="2025-09-12T17:35:31.199877313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526"
Sep 12 17:35:31.202202 containerd[1470]: time="2025-09-12T17:35:31.201147875Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:31.207650 containerd[1470]: time="2025-09-12T17:35:31.207557473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:31.209484 containerd[1470]: time="2025-09-12T17:35:31.209418331Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.232464773s"
Sep 12 17:35:31.209484 containerd[1470]: time="2025-09-12T17:35:31.209485339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\""
Sep 12 17:35:31.220167 containerd[1470]: time="2025-09-12T17:35:31.220105662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:35:31.224457 containerd[1470]: time="2025-09-12T17:35:31.223554092Z" level=info msg="CreateContainer within sandbox \"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Sep 12 17:35:31.253562 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount189347408.mount: Deactivated successfully.
Sep 12 17:35:31.257489 containerd[1470]: time="2025-09-12T17:35:31.257419047Z" level=info msg="CreateContainer within sandbox \"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14\""
Sep 12 17:35:31.260102 containerd[1470]: time="2025-09-12T17:35:31.259624694Z" level=info msg="StartContainer for \"edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14\""
Sep 12 17:35:31.390484 systemd[1]: Started cri-containerd-edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14.scope - libcontainer container edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14.
Sep 12 17:35:31.492325 containerd[1470]: time="2025-09-12T17:35:31.491867796Z" level=info msg="StartContainer for \"edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14\" returns successfully"
Sep 12 17:35:32.257330 systemd[1]: run-containerd-runc-k8s.io-edf6beac1343456e694fd8ac820db1d2c1f37b62673a3ea344f6f6488eb9ba14-runc.T5VcOd.mount: Deactivated successfully.
Sep 12 17:35:32.508890 containerd[1470]: time="2025-09-12T17:35:32.508685625Z" level=info msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\""
Sep 12 17:35:32.576119 systemd[1]: Started sshd@8-64.227.109.162:22-147.75.109.163:44268.service - OpenSSH per-connection server daemon (147.75.109.163:44268).
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.669 [WARNING][5039] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.671 [INFO][5039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.671 [INFO][5039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" iface="eth0" netns=""
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.671 [INFO][5039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.671 [INFO][5039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.735 [INFO][5051] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.735 [INFO][5051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.735 [INFO][5051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.748 [WARNING][5051] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.748 [INFO][5051] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.751 [INFO][5051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:32.762811 containerd[1470]: 2025-09-12 17:35:32.757 [INFO][5039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:32.765708 containerd[1470]: time="2025-09-12T17:35:32.764166216Z" level=info msg="TearDown network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" successfully"
Sep 12 17:35:32.765708 containerd[1470]: time="2025-09-12T17:35:32.764213767Z" level=info msg="StopPodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" returns successfully"
Sep 12 17:35:32.765817 sshd[5046]: Accepted publickey for core from 147.75.109.163 port 44268 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:35:32.768300 sshd[5046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:35:32.789169 systemd-logind[1455]: New session 9 of user core.
Sep 12 17:35:32.794746 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 17:35:32.833534 containerd[1470]: time="2025-09-12T17:35:32.833424233Z" level=info msg="RemovePodSandbox for \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\""
Sep 12 17:35:32.837891 containerd[1470]: time="2025-09-12T17:35:32.837684314Z" level=info msg="Forcibly stopping sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\""
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:32.951 [WARNING][5067] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" WorkloadEndpoint="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:32.952 [INFO][5067] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:32.952 [INFO][5067] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" iface="eth0" netns=""
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:32.953 [INFO][5067] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:32.953 [INFO][5067] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.027 [INFO][5078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.028 [INFO][5078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.028 [INFO][5078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.042 [WARNING][5078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.042 [INFO][5078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" HandleID="k8s-pod-network.3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8" Workload="ci--4081.3.6--8--31c29e3945-k8s-whisker--5c566ffb4c--zmk9h-eth0"
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.047 [INFO][5078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:33.054335 containerd[1470]: 2025-09-12 17:35:33.050 [INFO][5067] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8"
Sep 12 17:35:33.054335 containerd[1470]: time="2025-09-12T17:35:33.053461122Z" level=info msg="TearDown network for sandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" successfully"
Sep 12 17:35:33.098554 containerd[1470]: time="2025-09-12T17:35:33.098166676Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:35:33.151116 containerd[1470]: time="2025-09-12T17:35:33.150886545Z" level=info msg="RemovePodSandbox \"3faa6fbeca548782d61eb80823768f480dde2aadc022c7373b1e4ded2d3ad2f8\" returns successfully"
Sep 12 17:35:33.196286 containerd[1470]: time="2025-09-12T17:35:33.195901490Z" level=info msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\""
Sep 12 17:35:33.588461 sshd[5046]: pam_unix(sshd:session): session closed for user core
Sep 12 17:35:33.599001 systemd[1]: sshd@8-64.227.109.162:22-147.75.109.163:44268.service: Deactivated successfully.
Sep 12 17:35:33.604625 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 17:35:33.614700 systemd-logind[1455]: Session 9 logged out. Waiting for processes to exit.
Sep 12 17:35:33.622442 systemd-logind[1455]: Removed session 9.
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.499 [WARNING][5097] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d953a4e7-ed0f-478f-9cb2-fa836708ae8f", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6", Pod:"goldmane-7988f88666-kxmsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681be8ebc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.499 [INFO][5097] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.499 [INFO][5097] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" iface="eth0" netns=""
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.499 [INFO][5097] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.499 [INFO][5097] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.667 [INFO][5124] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.669 [INFO][5124] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.670 [INFO][5124] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.719 [WARNING][5124] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.720 [INFO][5124] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.729 [INFO][5124] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:33.767583 containerd[1470]: 2025-09-12 17:35:33.748 [INFO][5097] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:33.770740 containerd[1470]: time="2025-09-12T17:35:33.768053592Z" level=info msg="TearDown network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" successfully"
Sep 12 17:35:33.770740 containerd[1470]: time="2025-09-12T17:35:33.768104539Z" level=info msg="StopPodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" returns successfully"
Sep 12 17:35:33.770859 containerd[1470]: time="2025-09-12T17:35:33.770753058Z" level=info msg="RemovePodSandbox for \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\""
Sep 12 17:35:33.770859 containerd[1470]: time="2025-09-12T17:35:33.770798705Z" level=info msg="Forcibly stopping sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\""
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:33.958 [WARNING][5142] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"d953a4e7-ed0f-478f-9cb2-fa836708ae8f", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 51, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"8249697dc2c5a2b8a65f3e60637ef66072ecf3c11a57922e4e439fed64b8adb6", Pod:"goldmane-7988f88666-kxmsn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.73.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali681be8ebc06", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:33.959 [INFO][5142] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:33.959 [INFO][5142] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" iface="eth0" netns=""
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:33.959 [INFO][5142] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:33.959 [INFO][5142] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.044 [INFO][5149] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.044 [INFO][5149] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.044 [INFO][5149] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.063 [WARNING][5149] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.063 [INFO][5149] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" HandleID="k8s-pod-network.4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e" Workload="ci--4081.3.6--8--31c29e3945-k8s-goldmane--7988f88666--kxmsn-eth0"
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.066 [INFO][5149] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:34.077149 containerd[1470]: 2025-09-12 17:35:34.071 [INFO][5142] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e"
Sep 12 17:35:34.077909 containerd[1470]: time="2025-09-12T17:35:34.077217300Z" level=info msg="TearDown network for sandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" successfully"
Sep 12 17:35:34.096140 containerd[1470]: time="2025-09-12T17:35:34.096071144Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:35:34.096316 containerd[1470]: time="2025-09-12T17:35:34.096187520Z" level=info msg="RemovePodSandbox \"4a0783e5c35d5aa609ed4fa8bc75f3991790880b69dd0c8f0832ad7cbb28dd2e\" returns successfully"
Sep 12 17:35:34.098739 containerd[1470]: time="2025-09-12T17:35:34.098257481Z" level=info msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\""
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.184 [WARNING][5167] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832", Pod:"calico-apiserver-56b75c89b8-mssfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7da59dc6419", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.184 [INFO][5167] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.184 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" iface="eth0" netns=""
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.184 [INFO][5167] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.184 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.232 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.232 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.232 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.247 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.247 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.252 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:34.265745 containerd[1470]: 2025-09-12 17:35:34.259 [INFO][5167] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.266634 containerd[1470]: time="2025-09-12T17:35:34.266280845Z" level=info msg="TearDown network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" successfully"
Sep 12 17:35:34.266634 containerd[1470]: time="2025-09-12T17:35:34.266312660Z" level=info msg="StopPodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" returns successfully"
Sep 12 17:35:34.267684 containerd[1470]: time="2025-09-12T17:35:34.267248573Z" level=info msg="RemovePodSandbox for \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\""
Sep 12 17:35:34.267684 containerd[1470]: time="2025-09-12T17:35:34.267293569Z" level=info msg="Forcibly stopping sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\""
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.377 [WARNING][5188] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa60dfc9-ae0e-4b18-997a-7dcfb50c4f05", ResourceVersion:"970", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832", Pod:"calico-apiserver-56b75c89b8-mssfh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7da59dc6419", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.378 [INFO][5188] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.378 [INFO][5188] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" iface="eth0" netns=""
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.378 [INFO][5188] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.378 [INFO][5188] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.474 [INFO][5195] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.476 [INFO][5195] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.476 [INFO][5195] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.492 [WARNING][5195] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.492 [INFO][5195] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" HandleID="k8s-pod-network.620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--mssfh-eth0"
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.495 [INFO][5195] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 17:35:34.501062 containerd[1470]: 2025-09-12 17:35:34.498 [INFO][5188] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee"
Sep 12 17:35:34.502476 containerd[1470]: time="2025-09-12T17:35:34.501101836Z" level=info msg="TearDown network for sandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" successfully"
Sep 12 17:35:34.508483 containerd[1470]: time="2025-09-12T17:35:34.508422850Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 17:35:34.508483 containerd[1470]: time="2025-09-12T17:35:34.508511567Z" level=info msg="RemovePodSandbox \"620ba0ed23cbe541662b97bf15e2133c7137396fc3b18e320bc36da6bcaaa2ee\" returns successfully" Sep 12 17:35:34.509892 containerd[1470]: time="2025-09-12T17:35:34.509463628Z" level=info msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.608 [WARNING][5210] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fade2c26-8080-4e6a-9beb-ec982854f037", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4", Pod:"calico-apiserver-56b75c89b8-5w8s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e443ac887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.608 [INFO][5210] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.608 [INFO][5210] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" iface="eth0" netns="" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.608 [INFO][5210] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.608 [INFO][5210] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.668 [INFO][5217] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.669 [INFO][5217] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.669 [INFO][5217] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.689 [WARNING][5217] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.689 [INFO][5217] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.692 [INFO][5217] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:34.699831 containerd[1470]: 2025-09-12 17:35:34.696 [INFO][5210] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.700990 containerd[1470]: time="2025-09-12T17:35:34.699876412Z" level=info msg="TearDown network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" successfully" Sep 12 17:35:34.700990 containerd[1470]: time="2025-09-12T17:35:34.699906984Z" level=info msg="StopPodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" returns successfully" Sep 12 17:35:34.700990 containerd[1470]: time="2025-09-12T17:35:34.700953025Z" level=info msg="RemovePodSandbox for \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" Sep 12 17:35:34.700990 containerd[1470]: time="2025-09-12T17:35:34.700986180Z" level=info msg="Forcibly stopping sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\"" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.793 [WARNING][5232] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0", GenerateName:"calico-apiserver-56b75c89b8-", Namespace:"calico-apiserver", SelfLink:"", UID:"fade2c26-8080-4e6a-9beb-ec982854f037", ResourceVersion:"1002", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"56b75c89b8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4", Pod:"calico-apiserver-56b75c89b8-5w8s5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.73.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie5e443ac887", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.795 [INFO][5232] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.796 [INFO][5232] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" iface="eth0" netns="" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.796 [INFO][5232] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.796 [INFO][5232] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.915 [INFO][5240] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.915 [INFO][5240] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.915 [INFO][5240] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.933 [WARNING][5240] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.933 [INFO][5240] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" HandleID="k8s-pod-network.eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--apiserver--56b75c89b8--5w8s5-eth0" Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.937 [INFO][5240] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:34.951326 containerd[1470]: 2025-09-12 17:35:34.946 [INFO][5232] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9" Sep 12 17:35:34.952711 containerd[1470]: time="2025-09-12T17:35:34.952591481Z" level=info msg="TearDown network for sandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" successfully" Sep 12 17:35:34.959767 containerd[1470]: time="2025-09-12T17:35:34.959711598Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:35:34.960255 containerd[1470]: time="2025-09-12T17:35:34.960224882Z" level=info msg="RemovePodSandbox \"eeb6a281f0cb2690b26f9f982e2bc84f73987c4d44240c606149cddef5c619e9\" returns successfully" Sep 12 17:35:34.961488 containerd[1470]: time="2025-09-12T17:35:34.961446414Z" level=info msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.055 [WARNING][5254] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"058bb08d-c113-4683-9906-785068b1e043", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a", Pod:"coredns-7c65d6cfc9-hdjvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdcea1ec6c0", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.056 [INFO][5254] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.056 [INFO][5254] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" iface="eth0" netns="" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.056 [INFO][5254] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.056 [INFO][5254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.115 [INFO][5262] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.115 [INFO][5262] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.115 [INFO][5262] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.130 [WARNING][5262] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.130 [INFO][5262] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.134 [INFO][5262] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:35.148278 containerd[1470]: 2025-09-12 17:35:35.141 [INFO][5254] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.149080 containerd[1470]: time="2025-09-12T17:35:35.148855170Z" level=info msg="TearDown network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" successfully" Sep 12 17:35:35.149080 containerd[1470]: time="2025-09-12T17:35:35.148891494Z" level=info msg="StopPodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" returns successfully" Sep 12 17:35:35.150325 containerd[1470]: time="2025-09-12T17:35:35.150280804Z" level=info msg="RemovePodSandbox for \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" Sep 12 17:35:35.151004 containerd[1470]: time="2025-09-12T17:35:35.150668289Z" level=info msg="Forcibly stopping sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\"" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.237 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"058bb08d-c113-4683-9906-785068b1e043", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"99076e7a50e97435f82f811c93cbf80c39a937c863c80d176bf49694e909cf9a", Pod:"coredns-7c65d6cfc9-hdjvw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicdcea1ec6c0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:35.349407 containerd[1470]: 
2025-09-12 17:35:35.237 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.237 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" iface="eth0" netns="" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.237 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.237 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.317 [INFO][5283] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.319 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.319 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.337 [WARNING][5283] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.337 [INFO][5283] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" HandleID="k8s-pod-network.ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--hdjvw-eth0" Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.341 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:35.349407 containerd[1470]: 2025-09-12 17:35:35.346 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e" Sep 12 17:35:35.351076 containerd[1470]: time="2025-09-12T17:35:35.350405069Z" level=info msg="TearDown network for sandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" successfully" Sep 12 17:35:35.358457 containerd[1470]: time="2025-09-12T17:35:35.358387877Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:35:35.358694 containerd[1470]: time="2025-09-12T17:35:35.358491709Z" level=info msg="RemovePodSandbox \"ce8e18dbe08c1c7261a9ec207ddbdeda06e44b7308b581c223b25ea8775b448e\" returns successfully" Sep 12 17:35:35.361073 containerd[1470]: time="2025-09-12T17:35:35.360787339Z" level=info msg="StopPodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.474 [WARNING][5297] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0", GenerateName:"calico-kube-controllers-9d66ff4cd-", Namespace:"calico-system", SelfLink:"", UID:"47141fc5-2666-4c35-9d69-2367a7803e63", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d66ff4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f", Pod:"calico-kube-controllers-9d66ff4cd-cjszq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3938091101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.475 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.475 [INFO][5297] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" iface="eth0" netns="" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.475 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.475 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.532 [INFO][5304] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.532 [INFO][5304] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.532 [INFO][5304] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.550 [WARNING][5304] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.550 [INFO][5304] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.556 [INFO][5304] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:35.568558 containerd[1470]: 2025-09-12 17:35:35.563 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.568558 containerd[1470]: time="2025-09-12T17:35:35.568227067Z" level=info msg="TearDown network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" successfully" Sep 12 17:35:35.568558 containerd[1470]: time="2025-09-12T17:35:35.568271759Z" level=info msg="StopPodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" returns successfully" Sep 12 17:35:35.571459 containerd[1470]: time="2025-09-12T17:35:35.569080471Z" level=info msg="RemovePodSandbox for \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" Sep 12 17:35:35.571459 containerd[1470]: time="2025-09-12T17:35:35.569152951Z" level=info msg="Forcibly stopping sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\"" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.666 [WARNING][5318] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0", GenerateName:"calico-kube-controllers-9d66ff4cd-", Namespace:"calico-system", SelfLink:"", UID:"47141fc5-2666-4c35-9d69-2367a7803e63", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"9d66ff4cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f", Pod:"calico-kube-controllers-9d66ff4cd-cjszq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.73.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic3938091101", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.666 [INFO][5318] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.666 [INFO][5318] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" iface="eth0" netns="" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.666 [INFO][5318] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.666 [INFO][5318] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.714 [INFO][5326] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.715 [INFO][5326] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.715 [INFO][5326] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.733 [WARNING][5326] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.733 [INFO][5326] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" HandleID="k8s-pod-network.c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Workload="ci--4081.3.6--8--31c29e3945-k8s-calico--kube--controllers--9d66ff4cd--cjszq-eth0" Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.747 [INFO][5326] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:35.760292 containerd[1470]: 2025-09-12 17:35:35.753 [INFO][5318] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42" Sep 12 17:35:35.762137 containerd[1470]: time="2025-09-12T17:35:35.761422805Z" level=info msg="TearDown network for sandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" successfully" Sep 12 17:35:35.772362 containerd[1470]: time="2025-09-12T17:35:35.772308204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:35:35.772944 containerd[1470]: time="2025-09-12T17:35:35.772668872Z" level=info msg="RemovePodSandbox \"c82ed77e430b5f1d38d6e521fd83cdcb1159f8c30ac57b43c72560ec5c01ec42\" returns successfully" Sep 12 17:35:35.774145 containerd[1470]: time="2025-09-12T17:35:35.774038707Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.889 [WARNING][5340] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8", Pod:"coredns-7c65d6cfc9-tnpg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfd3b01975d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.889 [INFO][5340] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.889 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.889 [INFO][5340] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.889 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.967 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.968 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.968 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.985 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.985 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:35.991 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:36.007587 containerd[1470]: 2025-09-12 17:35:36.000 [INFO][5340] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.007587 containerd[1470]: time="2025-09-12T17:35:36.006335701Z" level=info msg="TearDown network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" successfully" Sep 12 17:35:36.007587 containerd[1470]: time="2025-09-12T17:35:36.006372313Z" level=info msg="StopPodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" returns successfully" Sep 12 17:35:36.012466 containerd[1470]: time="2025-09-12T17:35:36.011794580Z" level=info msg="RemovePodSandbox for \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:36.012466 containerd[1470]: time="2025-09-12T17:35:36.011848964Z" level=info msg="Forcibly stopping sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\"" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.151 [WARNING][5369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"37a86f89-3968-4ebf-bcc4-c3d17db0dd1b", ResourceVersion:"1048", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"b90f15313b15e3be94a561e7004405e9df1491de677a172eed4b1ea48a4b83f8", Pod:"coredns-7c65d6cfc9-tnpg9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.73.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicfd3b01975d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:36.304569 containerd[1470]: 
2025-09-12 17:35:36.153 [INFO][5369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.154 [INFO][5369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" iface="eth0" netns="" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.154 [INFO][5369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.154 [INFO][5369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.272 [INFO][5388] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.273 [INFO][5388] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.274 [INFO][5388] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.291 [WARNING][5388] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.291 [INFO][5388] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" HandleID="k8s-pod-network.a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Workload="ci--4081.3.6--8--31c29e3945-k8s-coredns--7c65d6cfc9--tnpg9-eth0" Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.296 [INFO][5388] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:36.304569 containerd[1470]: 2025-09-12 17:35:36.301 [INFO][5369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627" Sep 12 17:35:36.305840 containerd[1470]: time="2025-09-12T17:35:36.305120506Z" level=info msg="TearDown network for sandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" successfully" Sep 12 17:35:36.331040 containerd[1470]: time="2025-09-12T17:35:36.330115716Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 17:35:36.331040 containerd[1470]: time="2025-09-12T17:35:36.330190457Z" level=info msg="RemovePodSandbox \"a08fede6d771ab5db261e0b9e0f5823e0414390000c46fba3adcbe96f596f627\" returns successfully" Sep 12 17:35:36.352710 containerd[1470]: time="2025-09-12T17:35:36.352653427Z" level=info msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" Sep 12 17:35:36.509232 containerd[1470]: time="2025-09-12T17:35:36.471631512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.510132 containerd[1470]: time="2025-09-12T17:35:36.479933609Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 17:35:36.510132 containerd[1470]: time="2025-09-12T17:35:36.500930860Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 5.280304764s" Sep 12 17:35:36.510132 containerd[1470]: time="2025-09-12T17:35:36.510138345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 17:35:36.510692 containerd[1470]: time="2025-09-12T17:35:36.510657960Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.512582 containerd[1470]: time="2025-09-12T17:35:36.511553495Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:35:36.515742 containerd[1470]: time="2025-09-12T17:35:36.515689415Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.445 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c589e8cc-160a-4a03-8cff-84be5e73deb3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438", Pod:"csi-node-driver-cl7zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e0fbe8684c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.445 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.445 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" iface="eth0" netns="" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.445 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.445 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.491 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.491 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.491 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.504 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.504 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.506 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:36.517312 containerd[1470]: 2025-09-12 17:35:36.509 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.518880 containerd[1470]: time="2025-09-12T17:35:36.517369780Z" level=info msg="TearDown network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" successfully" Sep 12 17:35:36.518880 containerd[1470]: time="2025-09-12T17:35:36.517415340Z" level=info msg="StopPodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" returns successfully" Sep 12 17:35:36.518880 containerd[1470]: time="2025-09-12T17:35:36.517829082Z" level=info msg="RemovePodSandbox for \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" Sep 12 17:35:36.518880 containerd[1470]: time="2025-09-12T17:35:36.517859624Z" level=info msg="Forcibly stopping sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\"" Sep 12 17:35:36.519442 containerd[1470]: time="2025-09-12T17:35:36.519126383Z" level=info msg="CreateContainer within sandbox \"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 17:35:36.555329 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771173456.mount: Deactivated successfully. Sep 12 17:35:36.568884 containerd[1470]: time="2025-09-12T17:35:36.568630766Z" level=info msg="CreateContainer within sandbox \"96afc601850fe3a4bf16842e3361b14f4e0626c37f26df116b893dfd66e68832\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"951aedbf274051ea223b1549db2c823d0965dd3c3d5f23f98b691fa09887c0f2\"" Sep 12 17:35:36.571910 containerd[1470]: time="2025-09-12T17:35:36.571859658Z" level=info msg="StartContainer for \"951aedbf274051ea223b1549db2c823d0965dd3c3d5f23f98b691fa09887c0f2\"" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.634 [WARNING][5429] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c589e8cc-160a-4a03-8cff-84be5e73deb3", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 17, 34, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.6-8-31c29e3945", ContainerID:"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438", Pod:"csi-node-driver-cl7zl", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.73.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0e0fbe8684c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.635 [INFO][5429] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.635 [INFO][5429] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" iface="eth0" netns="" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.635 [INFO][5429] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.635 [INFO][5429] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.690 [INFO][5440] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.690 [INFO][5440] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.690 [INFO][5440] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.705 [WARNING][5440] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.706 [INFO][5440] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" HandleID="k8s-pod-network.b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Workload="ci--4081.3.6--8--31c29e3945-k8s-csi--node--driver--cl7zl-eth0" Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.715 [INFO][5440] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 17:35:36.729535 containerd[1470]: 2025-09-12 17:35:36.721 [INFO][5429] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0" Sep 12 17:35:36.730830 containerd[1470]: time="2025-09-12T17:35:36.729603522Z" level=info msg="TearDown network for sandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" successfully" Sep 12 17:35:36.731256 systemd[1]: Started cri-containerd-951aedbf274051ea223b1549db2c823d0965dd3c3d5f23f98b691fa09887c0f2.scope - libcontainer container 951aedbf274051ea223b1549db2c823d0965dd3c3d5f23f98b691fa09887c0f2. Sep 12 17:35:36.739262 containerd[1470]: time="2025-09-12T17:35:36.738603828Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\": an error occurred when try to find sandbox: not found. 
Sending the event with nil podSandboxStatus." Sep 12 17:35:36.739262 containerd[1470]: time="2025-09-12T17:35:36.738685290Z" level=info msg="RemovePodSandbox \"b84eaa10a3a8cc519fcbc67c41e800ec065eabf522c8c52ec4180c6d69bccfc0\" returns successfully" Sep 12 17:35:36.832082 containerd[1470]: time="2025-09-12T17:35:36.831718071Z" level=info msg="StartContainer for \"951aedbf274051ea223b1549db2c823d0965dd3c3d5f23f98b691fa09887c0f2\" returns successfully" Sep 12 17:35:37.327274 kubelet[2521]: I0912 17:35:37.327185 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-kxmsn" podStartSLOduration=39.224370039 podStartE2EDuration="46.327105946s" podCreationTimestamp="2025-09-12 17:34:51 +0000 UTC" firstStartedPulling="2025-09-12 17:35:24.116161194 +0000 UTC m=+52.038370222" lastFinishedPulling="2025-09-12 17:35:31.218897103 +0000 UTC m=+59.141106129" observedRunningTime="2025-09-12 17:35:32.227142151 +0000 UTC m=+60.149351183" watchObservedRunningTime="2025-09-12 17:35:37.327105946 +0000 UTC m=+65.249314994" Sep 12 17:35:37.329642 kubelet[2521]: I0912 17:35:37.328830 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56b75c89b8-mssfh" podStartSLOduration=39.691216676 podStartE2EDuration="51.32880872s" podCreationTimestamp="2025-09-12 17:34:46 +0000 UTC" firstStartedPulling="2025-09-12 17:35:24.875730691 +0000 UTC m=+52.797939713" lastFinishedPulling="2025-09-12 17:35:36.513322731 +0000 UTC m=+64.435531757" observedRunningTime="2025-09-12 17:35:37.328274687 +0000 UTC m=+65.250483716" watchObservedRunningTime="2025-09-12 17:35:37.32880872 +0000 UTC m=+65.251017762" Sep 12 17:35:38.616589 systemd[1]: Started sshd@9-64.227.109.162:22-147.75.109.163:44270.service - OpenSSH per-connection server daemon (147.75.109.163:44270). 
Sep 12 17:35:38.964090 sshd[5496]: Accepted publickey for core from 147.75.109.163 port 44270 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:35:38.969642 sshd[5496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:38.989717 systemd-logind[1455]: New session 10 of user core. Sep 12 17:35:38.996300 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:35:39.920259 sshd[5496]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:39.938910 systemd[1]: Started sshd@10-64.227.109.162:22-147.75.109.163:39852.service - OpenSSH per-connection server daemon (147.75.109.163:39852). Sep 12 17:35:39.942506 systemd[1]: sshd@9-64.227.109.162:22-147.75.109.163:44270.service: Deactivated successfully. Sep 12 17:35:39.952175 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:35:39.961805 systemd-logind[1455]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:35:39.973582 systemd-logind[1455]: Removed session 10. Sep 12 17:35:40.040545 sshd[5513]: Accepted publickey for core from 147.75.109.163 port 39852 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:35:40.047790 sshd[5513]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:40.065292 systemd-logind[1455]: New session 11 of user core. Sep 12 17:35:40.072311 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:35:40.576233 sshd[5513]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:40.591879 systemd[1]: sshd@10-64.227.109.162:22-147.75.109.163:39852.service: Deactivated successfully. Sep 12 17:35:40.600144 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:35:40.607314 systemd-logind[1455]: Session 11 logged out. Waiting for processes to exit. 
Sep 12 17:35:40.623539 systemd[1]: Started sshd@11-64.227.109.162:22-147.75.109.163:39854.service - OpenSSH per-connection server daemon (147.75.109.163:39854). Sep 12 17:35:40.629345 systemd-logind[1455]: Removed session 11. Sep 12 17:35:40.750334 sshd[5526]: Accepted publickey for core from 147.75.109.163 port 39854 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0 Sep 12 17:35:40.757748 sshd[5526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:35:40.775852 systemd-logind[1455]: New session 12 of user core. Sep 12 17:35:40.781400 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:35:41.131643 sshd[5526]: pam_unix(sshd:session): session closed for user core Sep 12 17:35:41.145584 systemd[1]: sshd@11-64.227.109.162:22-147.75.109.163:39854.service: Deactivated successfully. Sep 12 17:35:41.158775 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:35:41.164963 systemd-logind[1455]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:35:41.168497 systemd-logind[1455]: Removed session 12. 
Sep 12 17:35:42.141339 containerd[1470]: time="2025-09-12T17:35:42.141268729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:42.144973 containerd[1470]: time="2025-09-12T17:35:42.144706674Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 17:35:42.145906 containerd[1470]: time="2025-09-12T17:35:42.145822897Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:42.151815 containerd[1470]: time="2025-09-12T17:35:42.149202155Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:42.151815 containerd[1470]: time="2025-09-12T17:35:42.151576318Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.635562009s"
Sep 12 17:35:42.151815 containerd[1470]: time="2025-09-12T17:35:42.151625214Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 17:35:42.256396 containerd[1470]: time="2025-09-12T17:35:42.256173788Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 17:35:42.606858 containerd[1470]: time="2025-09-12T17:35:42.605567007Z" level=info msg="CreateContainer within sandbox \"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 17:35:42.684297 containerd[1470]: time="2025-09-12T17:35:42.684205885Z" level=info msg="CreateContainer within sandbox \"989e0c13d3f9aeef79666aa926764f13c0fdb8698ad3a8a556bda048eb693f2f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73\""
Sep 12 17:35:42.696493 containerd[1470]: time="2025-09-12T17:35:42.696313284Z" level=info msg="StartContainer for \"2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73\""
Sep 12 17:35:42.902342 systemd[1]: Started cri-containerd-2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73.scope - libcontainer container 2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73.
Sep 12 17:35:43.037296 containerd[1470]: time="2025-09-12T17:35:43.037078377Z" level=info msg="StartContainer for \"2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73\" returns successfully"
Sep 12 17:35:43.592509 kubelet[2521]: I0912 17:35:43.591560 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-9d66ff4cd-cjszq" podStartSLOduration=34.349436292 podStartE2EDuration="51.568406674s" podCreationTimestamp="2025-09-12 17:34:52 +0000 UTC" firstStartedPulling="2025-09-12 17:35:25.018959111 +0000 UTC m=+52.941168139" lastFinishedPulling="2025-09-12 17:35:42.237929489 +0000 UTC m=+70.160138521" observedRunningTime="2025-09-12 17:35:43.565872338 +0000 UTC m=+71.488081379" watchObservedRunningTime="2025-09-12 17:35:43.568406674 +0000 UTC m=+71.490615720"
Sep 12 17:35:43.624262 systemd[1]: run-containerd-runc-k8s.io-2d66b76dc52bde2198fd982280e6e716a1c5e8cccb749a895f8de036be93aa73-runc.dIFLye.mount: Deactivated successfully.
Sep 12 17:35:45.435725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount427716352.mount: Deactivated successfully.
Sep 12 17:35:45.483289 containerd[1470]: time="2025-09-12T17:35:45.483218144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:45.488125 containerd[1470]: time="2025-09-12T17:35:45.487919027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 17:35:45.491047 containerd[1470]: time="2025-09-12T17:35:45.489322905Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:45.494599 containerd[1470]: time="2025-09-12T17:35:45.494516146Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.23827364s"
Sep 12 17:35:45.494878 containerd[1470]: time="2025-09-12T17:35:45.494846804Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 17:35:45.495181 containerd[1470]: time="2025-09-12T17:35:45.495066780Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:45.519314 containerd[1470]: time="2025-09-12T17:35:45.519260215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\""
Sep 12 17:35:45.536353 containerd[1470]: time="2025-09-12T17:35:45.536242356Z" level=info msg="CreateContainer within sandbox \"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 17:35:45.571300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount317103896.mount: Deactivated successfully.
Sep 12 17:35:45.579212 containerd[1470]: time="2025-09-12T17:35:45.579052478Z" level=info msg="CreateContainer within sandbox \"8c2ee777f932b99aed627ae968085768cf01e08fbda1e7911fb8e85b56891eb3\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"4691f164f64316ac825d067b9355369d650dabfb63a2336972293f0fd58c0a09\""
Sep 12 17:35:45.583348 containerd[1470]: time="2025-09-12T17:35:45.583289075Z" level=info msg="StartContainer for \"4691f164f64316ac825d067b9355369d650dabfb63a2336972293f0fd58c0a09\""
Sep 12 17:35:45.724416 systemd[1]: Started cri-containerd-4691f164f64316ac825d067b9355369d650dabfb63a2336972293f0fd58c0a09.scope - libcontainer container 4691f164f64316ac825d067b9355369d650dabfb63a2336972293f0fd58c0a09.
Sep 12 17:35:45.887427 containerd[1470]: time="2025-09-12T17:35:45.887335152Z" level=info msg="StartContainer for \"4691f164f64316ac825d067b9355369d650dabfb63a2336972293f0fd58c0a09\" returns successfully"
Sep 12 17:35:45.957414 containerd[1470]: time="2025-09-12T17:35:45.957347286Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:45.993386 containerd[1470]: time="2025-09-12T17:35:45.985056953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 17:35:46.000499 containerd[1470]: time="2025-09-12T17:35:45.997632937Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 478.00799ms"
Sep 12 17:35:46.000499 containerd[1470]: time="2025-09-12T17:35:45.997710539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 12 17:35:46.000499 containerd[1470]: time="2025-09-12T17:35:45.999876822Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 17:35:46.015540 containerd[1470]: time="2025-09-12T17:35:46.013657958Z" level=info msg="CreateContainer within sandbox \"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 17:35:46.046185 containerd[1470]: time="2025-09-12T17:35:46.045976687Z" level=info msg="CreateContainer within sandbox \"4a032fb49898f1e3d099c65e73f23d9f797ea54c1a22bef43e75c837c1c56fa4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"fadb408b63c58410b932276d321e7e84f9e4c2516396ebd4979d5a525e5e3a60\""
Sep 12 17:35:46.047672 containerd[1470]: time="2025-09-12T17:35:46.047642697Z" level=info msg="StartContainer for \"fadb408b63c58410b932276d321e7e84f9e4c2516396ebd4979d5a525e5e3a60\""
Sep 12 17:35:46.097386 systemd[1]: Started cri-containerd-fadb408b63c58410b932276d321e7e84f9e4c2516396ebd4979d5a525e5e3a60.scope - libcontainer container fadb408b63c58410b932276d321e7e84f9e4c2516396ebd4979d5a525e5e3a60.
Sep 12 17:35:46.157698 systemd[1]: Started sshd@12-64.227.109.162:22-147.75.109.163:39866.service - OpenSSH per-connection server daemon (147.75.109.163:39866).
Sep 12 17:35:46.234539 containerd[1470]: time="2025-09-12T17:35:46.234284127Z" level=info msg="StartContainer for \"fadb408b63c58410b932276d321e7e84f9e4c2516396ebd4979d5a525e5e3a60\" returns successfully"
Sep 12 17:35:46.333428 sshd[5679]: Accepted publickey for core from 147.75.109.163 port 39866 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:35:46.341971 sshd[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:35:46.354512 systemd-logind[1455]: New session 13 of user core.
Sep 12 17:35:46.360341 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 17:35:46.821547 kubelet[2521]: I0912 17:35:46.787829 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-56b75c89b8-5w8s5" podStartSLOduration=40.962751045 podStartE2EDuration="1m0.736166395s" podCreationTimestamp="2025-09-12 17:34:46 +0000 UTC" firstStartedPulling="2025-09-12 17:35:26.22557365 +0000 UTC m=+54.147782665" lastFinishedPulling="2025-09-12 17:35:45.998989 +0000 UTC m=+73.921198015" observedRunningTime="2025-09-12 17:35:46.697555669 +0000 UTC m=+74.619764706" watchObservedRunningTime="2025-09-12 17:35:46.736166395 +0000 UTC m=+74.658375428"
Sep 12 17:35:46.834940 kubelet[2521]: I0912 17:35:46.832751 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-f7b85f5cd-b7jzh" podStartSLOduration=4.386818953 podStartE2EDuration="28.832718383s" podCreationTimestamp="2025-09-12 17:35:18 +0000 UTC" firstStartedPulling="2025-09-12 17:35:21.070945822 +0000 UTC m=+48.993154864" lastFinishedPulling="2025-09-12 17:35:45.516845264 +0000 UTC m=+73.439054294" observedRunningTime="2025-09-12 17:35:46.824733971 +0000 UTC m=+74.746943018" watchObservedRunningTime="2025-09-12 17:35:46.832718383 +0000 UTC m=+74.754927425"
Sep 12 17:35:47.405493 sshd[5679]: pam_unix(sshd:session): session closed for user core
Sep 12 17:35:47.416847 systemd[1]: sshd@12-64.227.109.162:22-147.75.109.163:39866.service: Deactivated successfully.
Sep 12 17:35:47.423589 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 17:35:47.426285 systemd-logind[1455]: Session 13 logged out. Waiting for processes to exit.
Sep 12 17:35:47.429157 systemd-logind[1455]: Removed session 13.
Sep 12 17:35:47.657542 kubelet[2521]: I0912 17:35:47.657373 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:35:48.832931 kubelet[2521]: I0912 17:35:48.831237 2521 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 17:35:49.092785 containerd[1470]: time="2025-09-12T17:35:49.092600141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:49.096276 containerd[1470]: time="2025-09-12T17:35:49.096167909Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 17:35:49.097147 containerd[1470]: time="2025-09-12T17:35:49.097087506Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:49.105738 containerd[1470]: time="2025-09-12T17:35:49.105626118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:35:49.110045 containerd[1470]: time="2025-09-12T17:35:49.108174493Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 3.108208472s"
Sep 12 17:35:49.110479 containerd[1470]: time="2025-09-12T17:35:49.110284563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 17:35:49.210923 containerd[1470]: time="2025-09-12T17:35:49.210586783Z" level=info msg="CreateContainer within sandbox \"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 17:35:49.317067 containerd[1470]: time="2025-09-12T17:35:49.316132795Z" level=info msg="CreateContainer within sandbox \"44bc69cf7029e8335d9aad8eb4d250c81136ef5e32064b4028c947cec9373438\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"df304930a7e44cba3ca8bcf4d8c89a28123bdd773922bac8f8045bb00c8a93f4\""
Sep 12 17:35:49.595495 containerd[1470]: time="2025-09-12T17:35:49.595060851Z" level=info msg="StartContainer for \"df304930a7e44cba3ca8bcf4d8c89a28123bdd773922bac8f8045bb00c8a93f4\""
Sep 12 17:35:49.675209 kubelet[2521]: E0912 17:35:49.674969 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:35:49.758572 systemd[1]: Started cri-containerd-df304930a7e44cba3ca8bcf4d8c89a28123bdd773922bac8f8045bb00c8a93f4.scope - libcontainer container df304930a7e44cba3ca8bcf4d8c89a28123bdd773922bac8f8045bb00c8a93f4.
Sep 12 17:35:49.919553 containerd[1470]: time="2025-09-12T17:35:49.919476841Z" level=info msg="StartContainer for \"df304930a7e44cba3ca8bcf4d8c89a28123bdd773922bac8f8045bb00c8a93f4\" returns successfully"
Sep 12 17:35:50.598975 kubelet[2521]: I0912 17:35:50.597110 2521 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 17:35:50.605848 kubelet[2521]: I0912 17:35:50.605335 2521 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 17:35:50.770647 systemd[1]: run-containerd-runc-k8s.io-10f0f516c246948e6ec96d49d40708bc220712b906eb251394ed095883f145bc-runc.kZbz35.mount: Deactivated successfully.
Sep 12 17:35:50.866140 kubelet[2521]: I0912 17:35:50.865141 2521 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cl7zl" podStartSLOduration=31.004514173 podStartE2EDuration="58.865105468s" podCreationTimestamp="2025-09-12 17:34:52 +0000 UTC" firstStartedPulling="2025-09-12 17:35:21.261048748 +0000 UTC m=+49.183257773" lastFinishedPulling="2025-09-12 17:35:49.121640022 +0000 UTC m=+77.043849068" observedRunningTime="2025-09-12 17:35:50.862691864 +0000 UTC m=+78.784900899" watchObservedRunningTime="2025-09-12 17:35:50.865105468 +0000 UTC m=+78.787314531"
Sep 12 17:35:51.325375 kubelet[2521]: E0912 17:35:51.325305 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:35:52.427815 systemd[1]: Started sshd@13-64.227.109.162:22-147.75.109.163:40406.service - OpenSSH per-connection server daemon (147.75.109.163:40406).
Sep 12 17:35:52.698617 sshd[5789]: Accepted publickey for core from 147.75.109.163 port 40406 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:35:52.707577 sshd[5789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:35:52.718376 systemd-logind[1455]: New session 14 of user core.
Sep 12 17:35:52.722752 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 17:35:53.324750 kubelet[2521]: E0912 17:35:53.324598 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:35:53.546263 sshd[5789]: pam_unix(sshd:session): session closed for user core
Sep 12 17:35:53.554081 systemd[1]: sshd@13-64.227.109.162:22-147.75.109.163:40406.service: Deactivated successfully.
Sep 12 17:35:53.560841 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 17:35:53.562856 systemd-logind[1455]: Session 14 logged out. Waiting for processes to exit.
Sep 12 17:35:53.567443 systemd-logind[1455]: Removed session 14.
Sep 12 17:35:58.586853 systemd[1]: Started sshd@14-64.227.109.162:22-147.75.109.163:40408.service - OpenSSH per-connection server daemon (147.75.109.163:40408).
Sep 12 17:35:58.700176 sshd[5807]: Accepted publickey for core from 147.75.109.163 port 40408 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:35:58.703541 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:35:58.710801 systemd-logind[1455]: New session 15 of user core.
Sep 12 17:35:58.715343 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 17:35:59.169785 sshd[5807]: pam_unix(sshd:session): session closed for user core
Sep 12 17:35:59.176145 systemd[1]: sshd@14-64.227.109.162:22-147.75.109.163:40408.service: Deactivated successfully.
Sep 12 17:35:59.179867 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 17:35:59.181902 systemd-logind[1455]: Session 15 logged out. Waiting for processes to exit.
Sep 12 17:35:59.184576 systemd-logind[1455]: Removed session 15.
Sep 12 17:36:02.343696 kubelet[2521]: E0912 17:36:02.343330 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:36:04.189579 systemd[1]: Started sshd@15-64.227.109.162:22-147.75.109.163:36822.service - OpenSSH per-connection server daemon (147.75.109.163:36822).
Sep 12 17:36:04.331297 sshd[5820]: Accepted publickey for core from 147.75.109.163 port 36822 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:04.333874 sshd[5820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:04.342421 systemd-logind[1455]: New session 16 of user core.
Sep 12 17:36:04.351464 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 17:36:04.696781 sshd[5820]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:04.723129 systemd[1]: Started sshd@16-64.227.109.162:22-147.75.109.163:36832.service - OpenSSH per-connection server daemon (147.75.109.163:36832).
Sep 12 17:36:04.723972 systemd[1]: sshd@15-64.227.109.162:22-147.75.109.163:36822.service: Deactivated successfully.
Sep 12 17:36:04.745497 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 17:36:04.749729 systemd-logind[1455]: Session 16 logged out. Waiting for processes to exit.
Sep 12 17:36:04.752175 systemd-logind[1455]: Removed session 16.
Sep 12 17:36:04.814487 sshd[5830]: Accepted publickey for core from 147.75.109.163 port 36832 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:04.817584 sshd[5830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:04.826402 systemd-logind[1455]: New session 17 of user core.
Sep 12 17:36:04.835389 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 17:36:05.350656 sshd[5830]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:05.368241 systemd[1]: Started sshd@17-64.227.109.162:22-147.75.109.163:36846.service - OpenSSH per-connection server daemon (147.75.109.163:36846).
Sep 12 17:36:05.369883 systemd[1]: sshd@16-64.227.109.162:22-147.75.109.163:36832.service: Deactivated successfully.
Sep 12 17:36:05.376421 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 17:36:05.379308 systemd-logind[1455]: Session 17 logged out. Waiting for processes to exit.
Sep 12 17:36:05.387775 systemd-logind[1455]: Removed session 17.
Sep 12 17:36:05.497859 sshd[5841]: Accepted publickey for core from 147.75.109.163 port 36846 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:05.501345 sshd[5841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:05.509046 systemd-logind[1455]: New session 18 of user core.
Sep 12 17:36:05.514574 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 17:36:09.502102 sshd[5841]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:09.564831 systemd[1]: Started sshd@18-64.227.109.162:22-147.75.109.163:36858.service - OpenSSH per-connection server daemon (147.75.109.163:36858).
Sep 12 17:36:09.566079 systemd[1]: sshd@17-64.227.109.162:22-147.75.109.163:36846.service: Deactivated successfully.
Sep 12 17:36:09.573357 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 17:36:09.573804 systemd[1]: session-18.scope: Consumed 1.006s CPU time.
Sep 12 17:36:09.584243 systemd-logind[1455]: Session 18 logged out. Waiting for processes to exit.
Sep 12 17:36:09.593714 systemd-logind[1455]: Removed session 18.
Sep 12 17:36:09.743328 sshd[5913]: Accepted publickey for core from 147.75.109.163 port 36858 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:09.749504 sshd[5913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:09.764571 systemd-logind[1455]: New session 19 of user core.
Sep 12 17:36:09.771435 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 17:36:10.835498 sshd[5913]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:10.849122 systemd[1]: sshd@18-64.227.109.162:22-147.75.109.163:36858.service: Deactivated successfully.
Sep 12 17:36:10.853880 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 17:36:10.859185 systemd-logind[1455]: Session 19 logged out. Waiting for processes to exit.
Sep 12 17:36:10.866610 systemd[1]: Started sshd@19-64.227.109.162:22-147.75.109.163:43070.service - OpenSSH per-connection server daemon (147.75.109.163:43070).
Sep 12 17:36:10.869754 systemd-logind[1455]: Removed session 19.
Sep 12 17:36:11.009066 sshd[5929]: Accepted publickey for core from 147.75.109.163 port 43070 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:11.011956 sshd[5929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:11.021918 systemd-logind[1455]: New session 20 of user core.
Sep 12 17:36:11.027311 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 17:36:11.258804 sshd[5929]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:11.265270 systemd-logind[1455]: Session 20 logged out. Waiting for processes to exit.
Sep 12 17:36:11.266469 systemd[1]: sshd@19-64.227.109.162:22-147.75.109.163:43070.service: Deactivated successfully.
Sep 12 17:36:11.270412 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 17:36:11.273004 systemd-logind[1455]: Removed session 20.
Sep 12 17:36:16.277085 systemd[1]: Started sshd@20-64.227.109.162:22-147.75.109.163:43078.service - OpenSSH per-connection server daemon (147.75.109.163:43078).
Sep 12 17:36:16.419222 sshd[5942]: Accepted publickey for core from 147.75.109.163 port 43078 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:16.427165 sshd[5942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:16.438134 systemd-logind[1455]: New session 21 of user core.
Sep 12 17:36:16.444358 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 17:36:16.643452 sshd[5942]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:16.650262 systemd-logind[1455]: Session 21 logged out. Waiting for processes to exit.
Sep 12 17:36:16.651083 systemd[1]: sshd@20-64.227.109.162:22-147.75.109.163:43078.service: Deactivated successfully.
Sep 12 17:36:16.653953 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 17:36:16.658505 systemd-logind[1455]: Removed session 21.
Sep 12 17:36:21.667979 systemd[1]: Started sshd@21-64.227.109.162:22-147.75.109.163:57356.service - OpenSSH per-connection server daemon (147.75.109.163:57356).
Sep 12 17:36:21.829196 sshd[5998]: Accepted publickey for core from 147.75.109.163 port 57356 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:21.834180 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:21.844258 systemd-logind[1455]: New session 22 of user core.
Sep 12 17:36:21.850156 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 17:36:22.147889 sshd[5998]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:22.161200 systemd[1]: sshd@21-64.227.109.162:22-147.75.109.163:57356.service: Deactivated successfully.
Sep 12 17:36:22.168561 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:36:22.170783 systemd-logind[1455]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:36:22.174151 systemd-logind[1455]: Removed session 22.
Sep 12 17:36:27.175509 systemd[1]: Started sshd@22-64.227.109.162:22-147.75.109.163:57362.service - OpenSSH per-connection server daemon (147.75.109.163:57362).
Sep 12 17:36:27.343509 sshd[6031]: Accepted publickey for core from 147.75.109.163 port 57362 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:27.350255 sshd[6031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:27.362457 systemd-logind[1455]: New session 23 of user core.
Sep 12 17:36:27.370272 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:36:27.866454 sshd[6031]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:27.875969 systemd[1]: sshd@22-64.227.109.162:22-147.75.109.163:57362.service: Deactivated successfully.
Sep 12 17:36:27.883672 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:36:27.886455 systemd-logind[1455]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:36:27.888740 systemd-logind[1455]: Removed session 23.
Sep 12 17:36:28.393064 kubelet[2521]: E0912 17:36:28.389844 2521 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2"
Sep 12 17:36:32.899895 systemd[1]: Started sshd@23-64.227.109.162:22-147.75.109.163:51900.service - OpenSSH per-connection server daemon (147.75.109.163:51900).
Sep 12 17:36:33.070050 sshd[6049]: Accepted publickey for core from 147.75.109.163 port 51900 ssh2: RSA SHA256:mQbxIsnpfzSP9iEyvd0V/AYIen7HiZXzEdosYrDCki0
Sep 12 17:36:33.072926 sshd[6049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:36:33.085676 systemd-logind[1455]: New session 24 of user core.
Sep 12 17:36:33.095402 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:36:33.684452 sshd[6049]: pam_unix(sshd:session): session closed for user core
Sep 12 17:36:33.692779 systemd[1]: sshd@23-64.227.109.162:22-147.75.109.163:51900.service: Deactivated successfully.
Sep 12 17:36:33.692855 systemd-logind[1455]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:36:33.701737 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:36:33.708934 systemd-logind[1455]: Removed session 24.