Sep 12 19:44:14.052833 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 19:44:14.052867 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:44:14.052895 kernel: BIOS-provided physical RAM map:
Sep 12 19:44:14.052933 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 19:44:14.052944 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 19:44:14.052954 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 19:44:14.052978 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 12 19:44:14.052988 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 12 19:44:14.052999 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 19:44:14.053009 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 19:44:14.053019 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 19:44:14.053030 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 19:44:14.053045 kernel: NX (Execute Disable) protection: active
Sep 12 19:44:14.053056 kernel: APIC: Static calls initialized
Sep 12 19:44:14.053069 kernel: SMBIOS 2.8 present.
Sep 12 19:44:14.053081 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 12 19:44:14.053092 kernel: Hypervisor detected: KVM
Sep 12 19:44:14.053108 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 19:44:14.053120 kernel: kvm-clock: using sched offset of 4374257177 cycles
Sep 12 19:44:14.053132 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 19:44:14.053144 kernel: tsc: Detected 2499.998 MHz processor
Sep 12 19:44:14.053155 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 19:44:14.053167 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 19:44:14.053179 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 12 19:44:14.053190 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 19:44:14.053202 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 19:44:14.053218 kernel: Using GB pages for direct mapping
Sep 12 19:44:14.053230 kernel: ACPI: Early table checksum verification disabled
Sep 12 19:44:14.053242 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 12 19:44:14.053253 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053265 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053276 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053288 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 12 19:44:14.053299 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053311 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053327 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053351 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:44:14.053363 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 12 19:44:14.053374 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 12 19:44:14.053388 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 12 19:44:14.053419 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 12 19:44:14.053431 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 12 19:44:14.053451 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 12 19:44:14.053464 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 12 19:44:14.053476 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 19:44:14.053488 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 19:44:14.053500 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 12 19:44:14.053512 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 12 19:44:14.053524 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 12 19:44:14.053536 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 12 19:44:14.053553 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 12 19:44:14.053565 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 12 19:44:14.053577 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 12 19:44:14.053589 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 12 19:44:14.053601 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 12 19:44:14.053613 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 12 19:44:14.053625 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 12 19:44:14.053637 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 12 19:44:14.053649 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 12 19:44:14.053666 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 12 19:44:14.053678 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 19:44:14.053690 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 19:44:14.053702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 12 19:44:14.053715 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Sep 12 19:44:14.053727 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Sep 12 19:44:14.053739 kernel: Zone ranges:
Sep 12 19:44:14.053752 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 19:44:14.053764 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 12 19:44:14.053780 kernel: Normal empty
Sep 12 19:44:14.053793 kernel: Movable zone start for each node
Sep 12 19:44:14.053805 kernel: Early memory node ranges
Sep 12 19:44:14.053829 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 19:44:14.053841 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 12 19:44:14.053853 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 12 19:44:14.053864 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 19:44:14.053876 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 19:44:14.053901 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 12 19:44:14.053913 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 19:44:14.055007 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 19:44:14.055021 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 19:44:14.055034 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 19:44:14.055046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 19:44:14.055058 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 19:44:14.055071 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 19:44:14.055083 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 19:44:14.055095 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 19:44:14.055107 kernel: TSC deadline timer available
Sep 12 19:44:14.055126 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Sep 12 19:44:14.055139 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 19:44:14.055151 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 19:44:14.055164 kernel: Booting paravirtualized kernel on KVM
Sep 12 19:44:14.055176 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 19:44:14.055189 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 12 19:44:14.055201 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 19:44:14.055214 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 19:44:14.055226 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 12 19:44:14.055243 kernel: kvm-guest: PV spinlocks enabled
Sep 12 19:44:14.055256 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 19:44:14.055270 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:44:14.055283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 19:44:14.055295 kernel: random: crng init done
Sep 12 19:44:14.055307 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 19:44:14.055319 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 19:44:14.055331 kernel: Fallback order for Node 0: 0
Sep 12 19:44:14.055349 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Sep 12 19:44:14.055361 kernel: Policy zone: DMA32
Sep 12 19:44:14.055373 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 19:44:14.055386 kernel: software IO TLB: area num 16.
Sep 12 19:44:14.055398 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 194820K reserved, 0K cma-reserved)
Sep 12 19:44:14.055411 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 12 19:44:14.055423 kernel: Kernel/User page tables isolation: enabled
Sep 12 19:44:14.055435 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 19:44:14.055448 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 19:44:14.055467 kernel: Dynamic Preempt: voluntary
Sep 12 19:44:14.055480 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 19:44:14.055493 kernel: rcu: RCU event tracing is enabled.
Sep 12 19:44:14.055505 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 12 19:44:14.055518 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 19:44:14.055543 kernel: Rude variant of Tasks RCU enabled.
Sep 12 19:44:14.055561 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 19:44:14.055574 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 19:44:14.055587 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 12 19:44:14.055600 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 12 19:44:14.055613 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 19:44:14.055625 kernel: Console: colour VGA+ 80x25
Sep 12 19:44:14.055643 kernel: printk: console [tty0] enabled
Sep 12 19:44:14.055656 kernel: printk: console [ttyS0] enabled
Sep 12 19:44:14.055669 kernel: ACPI: Core revision 20230628
Sep 12 19:44:14.055689 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 19:44:14.055702 kernel: x2apic enabled
Sep 12 19:44:14.055719 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 19:44:14.055732 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Sep 12 19:44:14.055751 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Sep 12 19:44:14.055765 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 19:44:14.055777 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 19:44:14.055790 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 19:44:14.055803 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 19:44:14.055815 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 19:44:14.055828 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 19:44:14.055841 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 19:44:14.055879 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 19:44:14.055894 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 19:44:14.055907 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 19:44:14.055920 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 12 19:44:14.055945 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 12 19:44:14.055958 kernel: active return thunk: its_return_thunk
Sep 12 19:44:14.055970 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 19:44:14.055983 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 19:44:14.055996 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 19:44:14.056008 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 19:44:14.056021 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 19:44:14.056041 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 19:44:14.056055 kernel: Freeing SMP alternatives memory: 32K
Sep 12 19:44:14.056067 kernel: pid_max: default: 32768 minimum: 301
Sep 12 19:44:14.056080 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 19:44:14.056092 kernel: landlock: Up and running.
Sep 12 19:44:14.056105 kernel: SELinux: Initializing.
Sep 12 19:44:14.056118 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 19:44:14.056130 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 19:44:14.056143 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 12 19:44:14.056156 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:44:14.056169 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:44:14.056188 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:44:14.056201 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 12 19:44:14.056214 kernel: signal: max sigframe size: 1776
Sep 12 19:44:14.056227 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 19:44:14.056240 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 19:44:14.056253 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 19:44:14.056266 kernel: smp: Bringing up secondary CPUs ...
Sep 12 19:44:14.056279 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 19:44:14.056297 kernel: .... node #0, CPUs: #1
Sep 12 19:44:14.056310 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 12 19:44:14.056323 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 19:44:14.056335 kernel: smpboot: Max logical packages: 16
Sep 12 19:44:14.056348 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Sep 12 19:44:14.056361 kernel: devtmpfs: initialized
Sep 12 19:44:14.056374 kernel: x86/mm: Memory block size: 128MB
Sep 12 19:44:14.056387 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 19:44:14.056400 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 12 19:44:14.056412 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 19:44:14.056430 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 19:44:14.056443 kernel: audit: initializing netlink subsys (disabled)
Sep 12 19:44:14.056456 kernel: audit: type=2000 audit(1757706252.050:1): state=initialized audit_enabled=0 res=1
Sep 12 19:44:14.056469 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 19:44:14.056482 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 19:44:14.056494 kernel: cpuidle: using governor menu
Sep 12 19:44:14.056507 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 19:44:14.056520 kernel: dca service started, version 1.12.1
Sep 12 19:44:14.056533 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 19:44:14.056551 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 19:44:14.056564 kernel: PCI: Using configuration type 1 for base access
Sep 12 19:44:14.056577 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 19:44:14.056590 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 19:44:14.056603 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 19:44:14.056616 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 19:44:14.056628 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 19:44:14.056641 kernel: ACPI: Added _OSI(Module Device)
Sep 12 19:44:14.056659 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 19:44:14.056672 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 19:44:14.056685 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 19:44:14.056698 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 19:44:14.056710 kernel: ACPI: Interpreter enabled
Sep 12 19:44:14.056723 kernel: ACPI: PM: (supports S0 S5)
Sep 12 19:44:14.056736 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 19:44:14.056749 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 19:44:14.056762 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 19:44:14.056774 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 19:44:14.056793 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 19:44:14.057117 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 19:44:14.057306 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 19:44:14.057478 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 19:44:14.057497 kernel: PCI host bridge to bus 0000:00
Sep 12 19:44:14.057688 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 19:44:14.057871 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 19:44:14.058050 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 19:44:14.058207 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 12 19:44:14.058362 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 19:44:14.058517 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 12 19:44:14.058673 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 19:44:14.058876 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 19:44:14.059100 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Sep 12 19:44:14.059276 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Sep 12 19:44:14.059448 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Sep 12 19:44:14.059638 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Sep 12 19:44:14.059811 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 19:44:14.060050 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.060238 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Sep 12 19:44:14.060422 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.060598 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Sep 12 19:44:14.060781 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.062160 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Sep 12 19:44:14.062351 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.062524 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Sep 12 19:44:14.062734 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.062978 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Sep 12 19:44:14.063172 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.063345 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Sep 12 19:44:14.063568 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.063721 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Sep 12 19:44:14.065492 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 12 19:44:14.065705 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Sep 12 19:44:14.065983 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 19:44:14.066162 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 19:44:14.066332 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Sep 12 19:44:14.066500 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 12 19:44:14.066680 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Sep 12 19:44:14.066876 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 12 19:44:14.067070 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 19:44:14.067242 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Sep 12 19:44:14.067412 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 12 19:44:14.067605 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 19:44:14.067775 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 19:44:14.070079 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 19:44:14.070277 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Sep 12 19:44:14.070464 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Sep 12 19:44:14.070659 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 19:44:14.070823 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 19:44:14.073080 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Sep 12 19:44:14.073271 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Sep 12 19:44:14.073459 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 12 19:44:14.073637 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 12 19:44:14.073815 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:44:14.074060 kernel: pci_bus 0000:02: extended config space not accessible
Sep 12 19:44:14.074257 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Sep 12 19:44:14.074442 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Sep 12 19:44:14.074630 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 12 19:44:14.074820 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 19:44:14.078168 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 12 19:44:14.078352 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Sep 12 19:44:14.078550 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 12 19:44:14.078736 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 19:44:14.078935 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:44:14.079143 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 12 19:44:14.079374 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 12 19:44:14.079549 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 12 19:44:14.079722 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 19:44:14.079937 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:44:14.080113 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 12 19:44:14.080282 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 19:44:14.080461 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:44:14.080631 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 12 19:44:14.080848 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 19:44:14.082267 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:44:14.082487 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 12 19:44:14.082670 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 19:44:14.082851 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:44:14.083056 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 12 19:44:14.083256 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 19:44:14.083441 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:44:14.083626 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 12 19:44:14.083810 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 19:44:14.086093 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:44:14.086127 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 19:44:14.086142 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 19:44:14.086155 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 19:44:14.086168 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 19:44:14.086197 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 19:44:14.086210 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 19:44:14.086226 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 19:44:14.086239 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 19:44:14.086251 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 19:44:14.086264 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 19:44:14.086277 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 19:44:14.086293 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 19:44:14.086306 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 19:44:14.086324 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 19:44:14.086337 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 19:44:14.086362 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 19:44:14.086375 kernel: iommu: Default domain type: Translated
Sep 12 19:44:14.086395 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 19:44:14.086421 kernel: PCI: Using ACPI for IRQ routing
Sep 12 19:44:14.086433 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 19:44:14.086445 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 19:44:14.086457 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 12 19:44:14.086670 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 19:44:14.086845 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 19:44:14.087074 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 19:44:14.087095 kernel: vgaarb: loaded
Sep 12 19:44:14.087108 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 19:44:14.087121 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 19:44:14.087135 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 19:44:14.087148 kernel: pnp: PnP ACPI init
Sep 12 19:44:14.087393 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 19:44:14.087414 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 19:44:14.087427 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 19:44:14.087440 kernel: NET: Registered PF_INET protocol family
Sep 12 19:44:14.087453 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 19:44:14.087465 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 19:44:14.087478 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 19:44:14.087491 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 19:44:14.087520 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 19:44:14.087533 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 19:44:14.087546 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 19:44:14.087571 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 19:44:14.087583 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 19:44:14.087595 kernel: NET: Registered PF_XDP protocol family
Sep 12 19:44:14.087776 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 12 19:44:14.088161 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 12 19:44:14.088363 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 12 19:44:14.088571 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 12 19:44:14.088786 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 19:44:14.089077 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 19:44:14.089252 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 19:44:14.089422 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 19:44:14.089602 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 12 19:44:14.089793 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 12 19:44:14.090012 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 12 19:44:14.090186 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 12 19:44:14.090355 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 12 19:44:14.090525 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 12 19:44:14.090741 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 12 19:44:14.090991 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 12 19:44:14.091200 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 12 19:44:14.091386 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 19:44:14.091559 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 12 19:44:14.091723 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 12 19:44:14.091966 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 12 19:44:14.092141 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:44:14.092310 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 12 19:44:14.092500 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 12 19:44:14.092704 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 19:44:14.092873 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:44:14.093084 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 12 19:44:14.093255 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 12 19:44:14.093447 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 19:44:14.093630 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:44:14.093813 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 12 19:44:14.094033 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 12 19:44:14.094221 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 19:44:14.094416 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:44:14.094629 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 12 19:44:14.094817 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 12 19:44:14.095060 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 19:44:14.095232 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:44:14.095423 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 12 19:44:14.095591 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 12 19:44:14.095763 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 19:44:14.096017 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:44:14.096189 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 12 19:44:14.096359 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 12 19:44:14.096538 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 19:44:14.096708 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:44:14.096897 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 12 19:44:14.097096 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 12 19:44:14.097269 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 19:44:14.097439 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:44:14.097600 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 19:44:14.097766 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 19:44:14.097969 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 19:44:14.098138 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 12 19:44:14.098295 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 19:44:14.098462 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 12 19:44:14.098667 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 12 19:44:14.098838 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 12 19:44:14.099089 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:44:14.099262 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 12 19:44:14.099444 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 12 19:44:14.099613 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 12 19:44:14.099773 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:44:14.099997 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 12 19:44:14.100164 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 12 19:44:14.100326 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:44:14.100504 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 12 19:44:14.100685 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 12 19:44:14.100891 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:44:14.101090 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 12 19:44:14.101253 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 12 19:44:14.101414 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:44:14.101584 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 12 19:44:14.101767 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 12 19:44:14.101986 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:44:14.102158 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 12 19:44:14.102322 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 12 19:44:14.102485 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:44:14.102659 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Sep 12 19:44:14.102835 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 12 19:44:14.103066 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:44:14.103088 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 19:44:14.103102 kernel: PCI: CLS 0 bytes, default 64
Sep 12 19:44:14.103116 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 19:44:14.103130 kernel: software IO TLB: mapped [mem
0x0000000079800000-0x000000007d800000] (64MB) Sep 12 19:44:14.103144 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 19:44:14.103158 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Sep 12 19:44:14.103172 kernel: Initialise system trusted keyrings Sep 12 19:44:14.103186 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 19:44:14.103207 kernel: Key type asymmetric registered Sep 12 19:44:14.103221 kernel: Asymmetric key parser 'x509' registered Sep 12 19:44:14.103235 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 19:44:14.103249 kernel: io scheduler mq-deadline registered Sep 12 19:44:14.103263 kernel: io scheduler kyber registered Sep 12 19:44:14.103277 kernel: io scheduler bfq registered Sep 12 19:44:14.103449 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 12 19:44:14.103623 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 12 19:44:14.103811 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.104043 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 12 19:44:14.104220 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 12 19:44:14.104403 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.104578 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 12 19:44:14.104751 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 12 19:44:14.105002 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.105177 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 12 19:44:14.105348 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Sep 12 19:44:14.105521 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.105703 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 12 19:44:14.105891 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 12 19:44:14.106099 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.106274 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 12 19:44:14.106444 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 12 19:44:14.106615 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.106791 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 12 19:44:14.107022 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 12 19:44:14.107204 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.107377 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 12 19:44:14.107548 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 12 19:44:14.107720 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:44:14.107742 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 19:44:14.107757 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 19:44:14.107771 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 19:44:14.107792 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 19:44:14.107807 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 19:44:14.107821 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Sep 12 19:44:14.107835 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 19:44:14.107848 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 19:44:14.107889 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 19:44:14.108081 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 12 19:44:14.108246 kernel: rtc_cmos 00:03: registered as rtc0 Sep 12 19:44:14.108418 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T19:44:13 UTC (1757706253) Sep 12 19:44:14.108580 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 12 19:44:14.108600 kernel: intel_pstate: CPU model not supported Sep 12 19:44:14.108614 kernel: NET: Registered PF_INET6 protocol family Sep 12 19:44:14.108627 kernel: Segment Routing with IPv6 Sep 12 19:44:14.108641 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 19:44:14.108655 kernel: NET: Registered PF_PACKET protocol family Sep 12 19:44:14.108668 kernel: Key type dns_resolver registered Sep 12 19:44:14.108682 kernel: IPI shorthand broadcast: enabled Sep 12 19:44:14.108703 kernel: sched_clock: Marking stable (1182014650, 247742490)->(1671779803, -242022663) Sep 12 19:44:14.108717 kernel: registered taskstats version 1 Sep 12 19:44:14.108731 kernel: Loading compiled-in X.509 certificates Sep 12 19:44:14.108744 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 19:44:14.108758 kernel: Key type .fscrypt registered Sep 12 19:44:14.108771 kernel: Key type fscrypt-provisioning registered Sep 12 19:44:14.108785 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 19:44:14.108798 kernel: ima: Allocated hash algorithm: sha1
Sep 12 19:44:14.108817 kernel: ima: No architecture policies found
Sep 12 19:44:14.108831 kernel: clk: Disabling unused clocks
Sep 12 19:44:14.108844 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 19:44:14.108898 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 19:44:14.108915 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 19:44:14.108940 kernel: Run /init as init process
Sep 12 19:44:14.108954 kernel: with arguments:
Sep 12 19:44:14.108967 kernel: /init
Sep 12 19:44:14.108981 kernel: with environment:
Sep 12 19:44:14.109001 kernel: HOME=/
Sep 12 19:44:14.109014 kernel: TERM=linux
Sep 12 19:44:14.109028 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 19:44:14.109049 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 19:44:14.109067 systemd[1]: Detected virtualization kvm.
Sep 12 19:44:14.109081 systemd[1]: Detected architecture x86-64.
Sep 12 19:44:14.109096 systemd[1]: Running in initrd.
Sep 12 19:44:14.109110 systemd[1]: No hostname configured, using default hostname.
Sep 12 19:44:14.109129 systemd[1]: Hostname set to .
Sep 12 19:44:14.109144 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 19:44:14.109158 systemd[1]: Queued start job for default target initrd.target.
Sep 12 19:44:14.109173 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:44:14.109187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:44:14.109203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 19:44:14.109217 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 19:44:14.109232 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 19:44:14.109252 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 19:44:14.109269 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 19:44:14.109284 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 19:44:14.109298 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:44:14.109313 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 19:44:14.109328 systemd[1]: Reached target paths.target - Path Units.
Sep 12 19:44:14.109342 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 19:44:14.109362 systemd[1]: Reached target swap.target - Swaps.
Sep 12 19:44:14.109377 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 19:44:14.109391 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 19:44:14.109406 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 19:44:14.109421 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 19:44:14.109435 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 19:44:14.109450 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 19:44:14.109465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 19:44:14.109484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 19:44:14.109499 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 19:44:14.109514 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 19:44:14.109529 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 19:44:14.109543 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 19:44:14.109558 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 19:44:14.109572 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 19:44:14.109587 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 19:44:14.109602 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:44:14.109675 systemd-journald[201]: Collecting audit messages is disabled.
Sep 12 19:44:14.109708 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 19:44:14.109731 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 19:44:14.109746 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 19:44:14.109768 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 19:44:14.109783 systemd-journald[201]: Journal started
Sep 12 19:44:14.109823 systemd-journald[201]: Runtime Journal (/run/log/journal/677d32955dc64611a018117c72e5054b) is 4.7M, max 38.0M, 33.2M free.
Sep 12 19:44:14.082965 systemd-modules-load[202]: Inserted module 'overlay'
Sep 12 19:44:14.172685 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 19:44:14.172729 kernel: Bridge firewalling registered
Sep 12 19:44:14.172748 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 19:44:14.130792 systemd-modules-load[202]: Inserted module 'br_netfilter'
Sep 12 19:44:14.175612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 19:44:14.177972 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:44:14.188154 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:44:14.205110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 19:44:14.208125 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 19:44:14.213979 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 19:44:14.225858 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 19:44:14.242906 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 19:44:14.245091 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:44:14.247388 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 19:44:14.254082 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 19:44:14.266115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 19:44:14.268381 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 19:44:14.287910 dracut-cmdline[234]: dracut-dracut-053
Sep 12 19:44:14.295442 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:44:14.310737 systemd-resolved[235]: Positive Trust Anchors:
Sep 12 19:44:14.310758 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 19:44:14.310807 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 19:44:14.320485 systemd-resolved[235]: Defaulting to hostname 'linux'.
Sep 12 19:44:14.323000 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 19:44:14.323862 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 19:44:14.403022 kernel: SCSI subsystem initialized
Sep 12 19:44:14.414950 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 19:44:14.428885 kernel: iscsi: registered transport (tcp)
Sep 12 19:44:14.455271 kernel: iscsi: registered transport (qla4xxx)
Sep 12 19:44:14.455325 kernel: QLogic iSCSI HBA Driver
Sep 12 19:44:14.514138 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 19:44:14.524112 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 19:44:14.558509 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 19:44:14.558567 kernel: device-mapper: uevent: version 1.0.3
Sep 12 19:44:14.560960 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 19:44:14.609935 kernel: raid6: sse2x4 gen() 13476 MB/s
Sep 12 19:44:14.627950 kernel: raid6: sse2x2 gen() 9301 MB/s
Sep 12 19:44:14.646710 kernel: raid6: sse2x1 gen() 9783 MB/s
Sep 12 19:44:14.646773 kernel: raid6: using algorithm sse2x4 gen() 13476 MB/s
Sep 12 19:44:14.665746 kernel: raid6: .... xor() 7612 MB/s, rmw enabled
Sep 12 19:44:14.665801 kernel: raid6: using ssse3x2 recovery algorithm
Sep 12 19:44:14.691895 kernel: xor: automatically using best checksumming function avx
Sep 12 19:44:14.890972 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 19:44:14.907687 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 19:44:14.918087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 19:44:14.935817 systemd-udevd[419]: Using default interface naming scheme 'v255'.
Sep 12 19:44:14.942945 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 19:44:14.952057 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 19:44:14.985150 dracut-pre-trigger[429]: rd.md=0: removing MD RAID activation
Sep 12 19:44:15.026196 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 19:44:15.034093 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 19:44:15.151466 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 19:44:15.162081 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 19:44:15.188649 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 19:44:15.192712 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 19:44:15.194495 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:44:15.196167 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 19:44:15.204992 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 19:44:15.237268 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 19:44:15.285884 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Sep 12 19:44:15.309915 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 19:44:15.330686 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 19:44:15.330742 kernel: GPT:17805311 != 125829119
Sep 12 19:44:15.330770 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 19:44:15.330788 kernel: GPT:17805311 != 125829119
Sep 12 19:44:15.330817 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 19:44:15.330834 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:44:15.336954 kernel: ACPI: bus type USB registered
Sep 12 19:44:15.336986 kernel: usbcore: registered new interface driver usbfs
Sep 12 19:44:15.337005 kernel: usbcore: registered new interface driver hub
Sep 12 19:44:15.341432 kernel: usbcore: registered new device driver usb
Sep 12 19:44:15.341466 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 19:44:15.346309 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 19:44:15.346494 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:44:15.350103 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:44:15.351122 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 19:44:15.351326 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:44:15.355696 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:44:15.359993 kernel: AVX version of gcm_enc/dec engaged.
Sep 12 19:44:15.361923 kernel: AES CTR mode by8 optimization enabled
Sep 12 19:44:15.369197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:44:15.385917 kernel: libata version 3.00 loaded.
Sep 12 19:44:15.398309 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 19:44:15.401737 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 19:44:15.406753 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 19:44:15.407036 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 19:44:15.413917 kernel: scsi host0: ahci
Sep 12 19:44:15.415905 kernel: scsi host1: ahci
Sep 12 19:44:15.419952 kernel: scsi host2: ahci
Sep 12 19:44:15.421951 kernel: scsi host3: ahci
Sep 12 19:44:15.422899 kernel: scsi host4: ahci
Sep 12 19:44:15.427344 kernel: scsi host5: ahci
Sep 12 19:44:15.428011 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Sep 12 19:44:15.428035 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Sep 12 19:44:15.428053 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Sep 12 19:44:15.428070 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Sep 12 19:44:15.428088 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Sep 12 19:44:15.428105 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Sep 12 19:44:15.462917 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (470)
Sep 12 19:44:15.477955 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469)
Sep 12 19:44:15.488469 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 19:44:15.549968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:44:15.564719 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 19:44:15.572215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 19:44:15.578364 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 19:44:15.579303 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 19:44:15.591157 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 19:44:15.597117 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:44:15.599978 disk-uuid[558]: Primary Header is updated.
Sep 12 19:44:15.599978 disk-uuid[558]: Secondary Entries is updated.
Sep 12 19:44:15.599978 disk-uuid[558]: Secondary Header is updated.
Sep 12 19:44:15.607932 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:44:15.614893 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:44:15.644196 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:44:15.738913 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.738980 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.741604 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.741904 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.744007 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.746723 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 19:44:15.766906 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Sep 12 19:44:15.769894 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Sep 12 19:44:15.773914 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 12 19:44:15.779003 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Sep 12 19:44:15.779243 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Sep 12 19:44:15.781272 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Sep 12 19:44:15.784335 kernel: hub 1-0:1.0: USB hub found
Sep 12 19:44:15.784601 kernel: hub 1-0:1.0: 4 ports detected
Sep 12 19:44:15.789762 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 12 19:44:15.790089 kernel: hub 2-0:1.0: USB hub found
Sep 12 19:44:15.791891 kernel: hub 2-0:1.0: 4 ports detected
Sep 12 19:44:16.028929 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 12 19:44:16.170918 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 19:44:16.177413 kernel: usbcore: registered new interface driver usbhid
Sep 12 19:44:16.177449 kernel: usbhid: USB HID core driver
Sep 12 19:44:16.185363 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Sep 12 19:44:16.185405 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Sep 12 19:44:16.618910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:44:16.621335 disk-uuid[559]: The operation has completed successfully.
Sep 12 19:44:16.675226 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 19:44:16.675414 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 19:44:16.702058 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 19:44:16.707234 sh[585]: Success
Sep 12 19:44:16.725903 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Sep 12 19:44:16.795649 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 19:44:16.806998 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 19:44:16.810816 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 19:44:16.839943 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 19:44:16.840033 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:44:16.840055 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 19:44:16.844021 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 19:44:16.844061 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 19:44:16.856220 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 19:44:16.857731 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 19:44:16.864053 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 19:44:16.868370 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 19:44:16.881226 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:44:16.881299 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:44:16.881323 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:44:16.890511 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:44:16.904475 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 19:44:16.907185 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:44:16.912814 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 19:44:16.921072 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 19:44:17.028411 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 19:44:17.047149 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 19:44:17.076825 ignition[672]: Ignition 2.19.0
Sep 12 19:44:17.077832 ignition[672]: Stage: fetch-offline
Sep 12 19:44:17.077960 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:17.077980 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:17.078146 ignition[672]: parsed url from cmdline: ""
Sep 12 19:44:17.078153 ignition[672]: no config URL provided
Sep 12 19:44:17.078163 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 19:44:17.082227 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 19:44:17.078179 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Sep 12 19:44:17.083904 systemd-networkd[770]: lo: Link UP
Sep 12 19:44:17.078188 ignition[672]: failed to fetch config: resource requires networking
Sep 12 19:44:17.083918 systemd-networkd[770]: lo: Gained carrier
Sep 12 19:44:17.078464 ignition[672]: Ignition finished successfully
Sep 12 19:44:17.086778 systemd-networkd[770]: Enumeration completed
Sep 12 19:44:17.087345 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:44:17.087351 systemd-networkd[770]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 19:44:17.089154 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 19:44:17.091380 systemd-networkd[770]: eth0: Link UP
Sep 12 19:44:17.091386 systemd-networkd[770]: eth0: Gained carrier
Sep 12 19:44:17.091410 systemd-networkd[770]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:44:17.091734 systemd[1]: Reached target network.target - Network.
Sep 12 19:44:17.105062 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 19:44:17.108952 systemd-networkd[770]: eth0: DHCPv4 address 10.230.9.238/30, gateway 10.230.9.237 acquired from 10.230.9.237
Sep 12 19:44:17.127438 ignition[777]: Ignition 2.19.0
Sep 12 19:44:17.127461 ignition[777]: Stage: fetch
Sep 12 19:44:17.127751 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:17.127771 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:17.127947 ignition[777]: parsed url from cmdline: ""
Sep 12 19:44:17.127954 ignition[777]: no config URL provided
Sep 12 19:44:17.127964 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 19:44:17.127980 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Sep 12 19:44:17.128220 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 12 19:44:17.128313 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 12 19:44:17.128367 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 12 19:44:17.151116 ignition[777]: GET result: OK
Sep 12 19:44:17.151252 ignition[777]: parsing config with SHA512: a1043275d291ae990d7047fb0e597131f90802df45284ef027553d891b087921c46d89f56b030b7bbe421aadbce1b1fa1595cbe80806b7d882859d5e6d650c82
Sep 12 19:44:17.160582 unknown[777]: fetched base config from "system"
Sep 12 19:44:17.160599 unknown[777]: fetched base config from "system"
Sep 12 19:44:17.161157 ignition[777]: fetch: fetch complete
Sep 12 19:44:17.160608 unknown[777]: fetched user config from "openstack"
Sep 12 19:44:17.161166 ignition[777]: fetch: fetch passed
Sep 12 19:44:17.163434 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 19:44:17.161228 ignition[777]: Ignition finished successfully
Sep 12 19:44:17.175097 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 19:44:17.195145 ignition[785]: Ignition 2.19.0
Sep 12 19:44:17.195164 ignition[785]: Stage: kargs
Sep 12 19:44:17.195438 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:17.195459 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:17.196661 ignition[785]: kargs: kargs passed
Sep 12 19:44:17.199300 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 19:44:17.196731 ignition[785]: Ignition finished successfully
Sep 12 19:44:17.213112 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 19:44:17.231189 ignition[791]: Ignition 2.19.0
Sep 12 19:44:17.231209 ignition[791]: Stage: disks
Sep 12 19:44:17.231468 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:17.231488 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:17.232736 ignition[791]: disks: disks passed
Sep 12 19:44:17.232808 ignition[791]: Ignition finished successfully
Sep 12 19:44:17.236150 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 19:44:17.237845 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 19:44:17.239427 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 19:44:17.241206 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 19:44:17.242886 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 19:44:17.244350 systemd[1]: Reached target basic.target - Basic System.
Sep 12 19:44:17.251025 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 19:44:17.272718 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 19:44:17.276826 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 19:44:17.281977 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 19:44:17.403890 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 19:44:17.404386 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 19:44:17.405684 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 19:44:17.418983 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 19:44:17.421985 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 19:44:17.424379 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 19:44:17.434614 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Sep 12 19:44:17.434670 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:44:17.439879 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:44:17.436480 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 12 19:44:17.443391 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:44:17.437314 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 19:44:17.437358 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 19:44:17.449279 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 19:44:17.459094 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 19:44:17.462050 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:44:17.470841 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 19:44:17.554472 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 19:44:17.562122 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory
Sep 12 19:44:17.568677 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 19:44:17.576280 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 19:44:17.685402 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 19:44:17.690004 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 19:44:17.693047 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 19:44:17.708886 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:44:17.729833 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 19:44:17.746651 ignition[925]: INFO : Ignition 2.19.0
Sep 12 19:44:17.746651 ignition[925]: INFO : Stage: mount
Sep 12 19:44:17.749465 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:17.749465 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:17.749465 ignition[925]: INFO : mount: mount passed
Sep 12 19:44:17.749465 ignition[925]: INFO : Ignition finished successfully
Sep 12 19:44:17.752692 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 19:44:17.836734 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 19:44:18.521240 systemd-networkd[770]: eth0: Gained IPv6LL
Sep 12 19:44:20.026469 systemd-networkd[770]: eth0: Ignoring DHCPv6 address 2a02:1348:179:827b:24:19ff:fee6:9ee/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:827b:24:19ff:fee6:9ee/64 assigned by NDisc.
Sep 12 19:44:20.026487 systemd-networkd[770]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 12 19:44:24.611414 coreos-metadata[809]: Sep 12 19:44:24.611 WARN failed to locate config-drive, using the metadata service API instead
Sep 12 19:44:24.636652 coreos-metadata[809]: Sep 12 19:44:24.636 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 12 19:44:24.652812 coreos-metadata[809]: Sep 12 19:44:24.652 INFO Fetch successful
Sep 12 19:44:24.653661 coreos-metadata[809]: Sep 12 19:44:24.653 INFO wrote hostname srv-l18mb.gb1.brightbox.com to /sysroot/etc/hostname
Sep 12 19:44:24.655732 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 12 19:44:24.655914 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 12 19:44:24.663967 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 19:44:24.684109 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 19:44:24.702881 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 12 19:44:24.706109 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:44:24.706143 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:44:24.708118 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:44:24.714901 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:44:24.716140 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 19:44:24.747338 ignition[958]: INFO : Ignition 2.19.0
Sep 12 19:44:24.747338 ignition[958]: INFO : Stage: files
Sep 12 19:44:24.749135 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:24.749135 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:24.750953 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 19:44:24.750953 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 19:44:24.750953 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 19:44:24.754019 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 19:44:24.754019 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 19:44:24.755999 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 19:44:24.754309 unknown[958]: wrote ssh authorized keys file for user: core
Sep 12 19:44:24.757989 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 19:44:24.757989 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 12 19:44:24.757989 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 19:44:24.757989 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Sep 12 19:44:24.944769 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 19:44:25.482923 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 19:44:25.485926 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 19:44:25.504888 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 19:44:25.504888 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 19:44:25.504888 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Sep 12 19:44:25.903988 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 19:44:27.550613 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Sep 12 19:44:27.550613 ignition[958]: INFO : files: op(c): [started] processing unit "containerd.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(c): [finished] processing unit "containerd.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 19:44:27.557055 ignition[958]: INFO : files: files passed
Sep 12 19:44:27.557055 ignition[958]: INFO : Ignition finished successfully
Sep 12 19:44:27.556148 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 19:44:27.569161 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 19:44:27.582135 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 19:44:27.589102 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 19:44:27.589489 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 19:44:27.598885 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:44:27.598885 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:44:27.601279 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:44:27.603495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 19:44:27.604600 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 19:44:27.617506 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 19:44:27.646495 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 19:44:27.646701 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 19:44:27.648598 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 19:44:27.650090 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 19:44:27.651774 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 19:44:27.657056 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 19:44:27.675743 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 19:44:27.683102 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 19:44:27.697419 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 19:44:27.699421 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:44:27.700364 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 19:44:27.701894 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 19:44:27.702086 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 19:44:27.703931 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 19:44:27.704942 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 19:44:27.706471 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 19:44:27.708140 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 19:44:27.709571 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 19:44:27.711205 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 19:44:27.712777 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 19:44:27.714427 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 19:44:27.715986 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 19:44:27.717604 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 19:44:27.718982 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 19:44:27.719170 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 19:44:27.720928 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 19:44:27.721853 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:44:27.723278 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 19:44:27.724951 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:44:27.725998 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 19:44:27.726162 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 19:44:27.728016 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 19:44:27.728188 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 19:44:27.730100 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 19:44:27.730257 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 19:44:27.741966 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 19:44:27.742758 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 19:44:27.743001 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 19:44:27.749251 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 19:44:27.756347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 19:44:27.756533 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 19:44:27.759391 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 19:44:27.759574 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 19:44:27.766113 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 19:44:27.766280 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 19:44:27.774641 ignition[1010]: INFO : Ignition 2.19.0
Sep 12 19:44:27.777397 ignition[1010]: INFO : Stage: umount
Sep 12 19:44:27.778631 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:44:27.780916 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:44:27.780916 ignition[1010]: INFO : umount: umount passed
Sep 12 19:44:27.780916 ignition[1010]: INFO : Ignition finished successfully
Sep 12 19:44:27.784127 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 19:44:27.784300 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 19:44:27.786320 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 19:44:27.786437 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 19:44:27.787360 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 19:44:27.787432 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 19:44:27.788717 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 19:44:27.788784 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 19:44:27.790107 systemd[1]: Stopped target network.target - Network.
Sep 12 19:44:27.791366 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 19:44:27.791441 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 19:44:27.792938 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 19:44:27.794232 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 19:44:27.794668 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:44:27.795773 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 19:44:27.797122 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 19:44:27.798641 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 19:44:27.798719 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 19:44:27.800204 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 19:44:27.800267 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 19:44:27.801604 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 19:44:27.801675 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 19:44:27.803123 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 19:44:27.803194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 19:44:27.805063 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 19:44:27.807838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 19:44:27.810021 systemd-networkd[770]: eth0: DHCPv6 lease lost
Sep 12 19:44:27.813115 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 19:44:27.813306 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 19:44:27.815237 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 19:44:27.815302 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 19:44:27.823073 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 19:44:27.826033 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 19:44:27.826115 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 19:44:27.827905 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 19:44:27.830788 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 19:44:27.830989 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 19:44:27.835473 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 19:44:27.835751 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 19:44:27.849485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 19:44:27.849655 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 19:44:27.852025 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 19:44:27.852083 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 19:44:27.853558 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 19:44:27.853638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 19:44:27.855847 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 19:44:27.855942 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 19:44:27.857331 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 19:44:27.857405 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:44:27.865080 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 19:44:27.865998 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 19:44:27.866084 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 19:44:27.868559 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 19:44:27.868657 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 19:44:27.869411 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 19:44:27.869478 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 19:44:27.870318 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 19:44:27.870389 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 19:44:27.873050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 19:44:27.873117 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:44:27.875128 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 19:44:27.875288 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 19:44:27.880723 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 19:44:27.880930 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 19:44:27.904538 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 19:44:27.909504 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 19:44:27.909703 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 19:44:27.911733 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 19:44:27.912742 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 19:44:27.912828 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 19:44:27.929147 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 19:44:27.947800 systemd[1]: Switching root.
Sep 12 19:44:27.983623 systemd-journald[201]: Journal stopped
Sep 12 19:44:29.605939 systemd-journald[201]: Received SIGTERM from PID 1 (systemd).
Sep 12 19:44:29.606153 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 19:44:29.606201 kernel: SELinux: policy capability open_perms=1
Sep 12 19:44:29.606228 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 19:44:29.606255 kernel: SELinux: policy capability always_check_network=0
Sep 12 19:44:29.606280 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 19:44:29.606300 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 19:44:29.606344 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 19:44:29.606370 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 19:44:29.606404 kernel: audit: type=1403 audit(1757706268.308:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 19:44:29.606454 systemd[1]: Successfully loaded SELinux policy in 58.193ms.
Sep 12 19:44:29.606514 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.834ms.
Sep 12 19:44:29.606545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 19:44:29.606577 systemd[1]: Detected virtualization kvm.
Sep 12 19:44:29.606600 systemd[1]: Detected architecture x86-64.
Sep 12 19:44:29.606620 systemd[1]: Detected first boot.
Sep 12 19:44:29.606641 systemd[1]: Hostname set to .
Sep 12 19:44:29.606671 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 19:44:29.606693 zram_generator::config[1071]: No configuration found.
Sep 12 19:44:29.606726 systemd[1]: Populated /etc with preset unit settings.
Sep 12 19:44:29.606748 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 19:44:29.606775 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 19:44:29.606798 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 19:44:29.606826 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 19:44:29.606848 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 19:44:29.609620 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 19:44:29.609650 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 19:44:29.609681 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 19:44:29.609718 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 19:44:29.609746 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 19:44:29.609774 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:44:29.609809 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:44:29.609832 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 19:44:29.609853 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 19:44:29.612093 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 19:44:29.612136 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 19:44:29.612159 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 19:44:29.612181 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:44:29.612202 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 19:44:29.612222 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:44:29.612265 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 19:44:29.612286 systemd[1]: Reached target slices.target - Slice Units. Sep 12 19:44:29.612330 systemd[1]: Reached target swap.target - Swaps. Sep 12 19:44:29.612353 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 19:44:29.612391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 19:44:29.612412 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 12 19:44:29.612458 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 12 19:44:29.612490 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 19:44:29.612527 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 19:44:29.612557 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 19:44:29.612584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 19:44:29.612613 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 12 19:44:29.612640 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 19:44:29.612662 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 19:44:29.612684 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:29.612710 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 19:44:29.612732 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 19:44:29.612758 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 19:44:29.612781 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 19:44:29.612810 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 12 19:44:29.612839 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 19:44:29.612973 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 19:44:29.613003 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 19:44:29.613024 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 19:44:29.613045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 19:44:29.613073 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 19:44:29.613095 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 19:44:29.613116 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 19:44:29.613137 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Sep 12 19:44:29.613168 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Sep 12 19:44:29.613191 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 19:44:29.613211 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 19:44:29.613232 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 19:44:29.613258 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 19:44:29.613297 kernel: fuse: init (API version 7.39) Sep 12 19:44:29.613317 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 19:44:29.613344 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 19:44:29.613364 kernel: loop: module loaded Sep 12 19:44:29.613406 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 19:44:29.613433 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 19:44:29.613455 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 19:44:29.613476 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 19:44:29.613506 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 19:44:29.613537 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 19:44:29.613559 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 19:44:29.613579 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 19:44:29.613608 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 19:44:29.613630 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 19:44:29.613651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 19:44:29.613678 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 19:44:29.613714 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 19:44:29.613742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 19:44:29.613774 kernel: ACPI: bus type drm_connector registered Sep 12 19:44:29.613832 systemd-journald[1180]: Collecting audit messages is disabled. Sep 12 19:44:29.617442 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 19:44:29.617509 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 19:44:29.617535 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 19:44:29.617557 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 19:44:29.617579 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Sep 12 19:44:29.617599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 19:44:29.617620 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 19:44:29.617655 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 19:44:29.617680 systemd-journald[1180]: Journal started Sep 12 19:44:29.617719 systemd-journald[1180]: Runtime Journal (/run/log/journal/677d32955dc64611a018117c72e5054b) is 4.7M, max 38.0M, 33.2M free. Sep 12 19:44:29.621879 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 19:44:29.626665 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 19:44:29.641660 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 19:44:29.650977 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 19:44:29.663967 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 19:44:29.664920 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 19:44:29.670156 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 19:44:29.681105 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 19:44:29.683969 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 19:44:29.692031 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 19:44:29.695015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 19:44:29.705998 systemd-journald[1180]: Time spent on flushing to /var/log/journal/677d32955dc64611a018117c72e5054b is 68.883ms for 1122 entries. 
Sep 12 19:44:29.705998 systemd-journald[1180]: System Journal (/var/log/journal/677d32955dc64611a018117c72e5054b) is 8.0M, max 584.8M, 576.8M free. Sep 12 19:44:29.811750 systemd-journald[1180]: Received client request to flush runtime journal. Sep 12 19:44:29.700061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 19:44:29.716597 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 12 19:44:29.723359 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 19:44:29.732965 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 19:44:29.750391 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 19:44:29.751429 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 19:44:29.784110 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 19:44:29.817068 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 19:44:29.828288 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Sep 12 19:44:29.828317 systemd-tmpfiles[1225]: ACLs are not supported, ignoring. Sep 12 19:44:29.845454 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 19:44:29.863128 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 12 19:44:29.865264 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 12 19:44:29.875140 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 19:44:29.892203 udevadm[1241]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 12 19:44:29.926779 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Sep 12 19:44:29.936134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 19:44:29.962937 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 12 19:44:29.962964 systemd-tmpfiles[1247]: ACLs are not supported, ignoring. Sep 12 19:44:29.971436 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 19:44:30.423975 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 19:44:30.433087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 19:44:30.468249 systemd-udevd[1253]: Using default interface naming scheme 'v255'. Sep 12 19:44:30.503207 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 19:44:30.515050 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 19:44:30.543076 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 19:44:30.607550 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Sep 12 19:44:30.647982 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1261) Sep 12 19:44:30.659012 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 19:44:30.715887 kernel: mousedev: PS/2 mouse device common for all mice Sep 12 19:44:30.742010 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 19:44:30.747908 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Sep 12 19:44:30.755895 kernel: ACPI: button: Power Button [PWRF] Sep 12 19:44:30.829310 systemd-networkd[1258]: lo: Link UP Sep 12 19:44:30.831212 systemd-networkd[1258]: lo: Gained carrier Sep 12 19:44:30.834445 systemd-networkd[1258]: Enumeration completed Sep 12 19:44:30.835064 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Sep 12 19:44:30.838663 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 19:44:30.840109 systemd-networkd[1258]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 19:44:30.841509 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 19:44:30.842149 systemd-networkd[1258]: eth0: Link UP Sep 12 19:44:30.842283 systemd-networkd[1258]: eth0: Gained carrier Sep 12 19:44:30.842386 systemd-networkd[1258]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 19:44:30.843050 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 19:44:30.864990 systemd-networkd[1258]: eth0: DHCPv4 address 10.230.9.238/30, gateway 10.230.9.237 acquired from 10.230.9.237 Sep 12 19:44:30.868885 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Sep 12 19:44:30.890504 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Sep 12 19:44:30.893711 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Sep 12 19:44:30.894050 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Sep 12 19:44:30.942139 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 19:44:31.127709 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 19:44:31.146075 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 12 19:44:31.159167 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 12 19:44:31.175800 lvm[1292]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 12 19:44:31.210210 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 12 19:44:31.211947 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 19:44:31.219056 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 12 19:44:31.226136 lvm[1296]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 12 19:44:31.259093 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Sep 12 19:44:31.260762 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 12 19:44:31.262107 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 19:44:31.262262 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 12 19:44:31.263228 systemd[1]: Reached target machines.target - Containers. Sep 12 19:44:31.265715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 12 19:44:31.272076 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 19:44:31.274966 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 19:44:31.276040 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 19:44:31.283100 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 19:44:31.296063 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 12 19:44:31.307601 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 19:44:31.315058 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Sep 12 19:44:31.321158 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 19:44:31.346891 kernel: loop0: detected capacity change from 0 to 140768 Sep 12 19:44:31.353233 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 19:44:31.356220 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 12 19:44:31.386115 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 19:44:31.409920 kernel: loop1: detected capacity change from 0 to 142488 Sep 12 19:44:31.448900 kernel: loop2: detected capacity change from 0 to 221472 Sep 12 19:44:31.491515 kernel: loop3: detected capacity change from 0 to 8 Sep 12 19:44:31.524377 kernel: loop4: detected capacity change from 0 to 140768 Sep 12 19:44:31.546269 kernel: loop5: detected capacity change from 0 to 142488 Sep 12 19:44:31.566908 kernel: loop6: detected capacity change from 0 to 221472 Sep 12 19:44:31.580724 kernel: loop7: detected capacity change from 0 to 8 Sep 12 19:44:31.585502 (sd-merge)[1317]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Sep 12 19:44:31.588898 (sd-merge)[1317]: Merged extensions into '/usr'. Sep 12 19:44:31.602813 systemd[1]: Reloading requested from client PID 1305 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 19:44:31.602940 systemd[1]: Reloading... Sep 12 19:44:31.697892 zram_generator::config[1345]: No configuration found. Sep 12 19:44:31.919280 ldconfig[1300]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 19:44:31.939131 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 19:44:32.029734 systemd[1]: Reloading finished in 426 ms. 
Sep 12 19:44:32.056565 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 19:44:32.058134 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 19:44:32.073139 systemd[1]: Starting ensure-sysext.service... Sep 12 19:44:32.076074 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 19:44:32.084094 systemd[1]: Reloading requested from client PID 1408 ('systemctl') (unit ensure-sysext.service)... Sep 12 19:44:32.084117 systemd[1]: Reloading... Sep 12 19:44:32.132090 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 19:44:32.132732 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 19:44:32.139930 zram_generator::config[1435]: No configuration found. Sep 12 19:44:32.142812 systemd-tmpfiles[1409]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 19:44:32.143303 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Sep 12 19:44:32.143450 systemd-tmpfiles[1409]: ACLs are not supported, ignoring. Sep 12 19:44:32.154792 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 19:44:32.154810 systemd-tmpfiles[1409]: Skipping /boot Sep 12 19:44:32.179730 systemd-tmpfiles[1409]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 19:44:32.179751 systemd-tmpfiles[1409]: Skipping /boot Sep 12 19:44:32.379350 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 19:44:32.473623 systemd[1]: Reloading finished in 388 ms. Sep 12 19:44:32.502611 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Sep 12 19:44:32.514168 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 12 19:44:32.519042 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 19:44:32.529081 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 19:44:32.535087 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 12 19:44:32.542179 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 19:44:32.567239 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:32.567567 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 19:44:32.577972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 19:44:32.593688 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 19:44:32.601700 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 19:44:32.606550 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 19:44:32.608042 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:32.626244 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 19:44:32.626535 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 19:44:32.633587 augenrules[1525]: No rules Sep 12 19:44:32.634519 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 12 19:44:32.638164 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 19:44:32.638669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Sep 12 19:44:32.649171 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:32.649619 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 19:44:32.658217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 19:44:32.670841 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 19:44:32.674038 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 19:44:32.674227 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:32.677809 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 19:44:32.686223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 12 19:44:32.690181 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 19:44:32.690472 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 19:44:32.691940 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 19:44:32.692192 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 19:44:32.696042 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 19:44:32.697130 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 19:44:32.706949 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 19:44:32.726366 systemd[1]: Finished ensure-sysext.service. Sep 12 19:44:32.731493 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Sep 12 19:44:32.731771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 19:44:32.739139 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 19:44:32.744833 systemd-resolved[1511]: Positive Trust Anchors: Sep 12 19:44:32.745473 systemd-resolved[1511]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 19:44:32.745624 systemd-resolved[1511]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 19:44:32.746048 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 19:44:32.751702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 19:44:32.755236 systemd-resolved[1511]: Using system hostname 'srv-l18mb.gb1.brightbox.com'. Sep 12 19:44:32.763617 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 19:44:32.764562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 19:44:32.770092 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 19:44:32.786281 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Sep 12 19:44:32.787087 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 19:44:32.787135 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Sep 12 19:44:32.787680 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 19:44:32.790602 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 19:44:32.790959 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 19:44:32.793979 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 19:44:32.794228 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 19:44:32.797571 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 19:44:32.797829 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 19:44:32.801121 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 19:44:32.801471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 19:44:32.809970 systemd[1]: Reached target network.target - Network. Sep 12 19:44:32.811188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 19:44:32.812086 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 19:44:32.812189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 19:44:32.814462 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Sep 12 19:44:32.857622 systemd-networkd[1258]: eth0: Gained IPv6LL Sep 12 19:44:32.862840 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 19:44:32.867520 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 19:44:32.889228 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 19:44:32.891588 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 19:44:32.892502 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 19:44:32.893350 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 19:44:32.894317 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 19:44:32.895168 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 19:44:32.895217 systemd[1]: Reached target paths.target - Path Units. Sep 12 19:44:32.895890 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 19:44:32.902825 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 19:44:32.903706 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 19:44:32.904514 systemd[1]: Reached target timers.target - Timer Units. Sep 12 19:44:32.906684 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 19:44:32.909681 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 19:44:32.912581 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 19:44:32.915189 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 19:44:32.915988 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 19:44:32.916677 systemd[1]: Reached target basic.target - Basic System. 
Sep 12 19:44:32.917687 systemd[1]: System is tainted: cgroupsv1 Sep 12 19:44:32.917750 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 19:44:32.917796 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 19:44:32.920777 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 19:44:32.924052 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Sep 12 19:44:32.931084 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 19:44:32.944126 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 19:44:32.950233 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 19:44:32.951987 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 19:44:32.961938 jq[1580]: false Sep 12 19:44:32.964977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:44:32.979087 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 19:44:32.986155 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 19:44:33.003981 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Sep 12 19:44:33.011598 extend-filesystems[1581]: Found loop4 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found loop5 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found loop6 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found loop7 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda1 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda2 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda3 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found usr Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda4 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda6 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda7 Sep 12 19:44:33.011598 extend-filesystems[1581]: Found vda9 Sep 12 19:44:33.011598 extend-filesystems[1581]: Checking size of /dev/vda9 Sep 12 19:44:33.022287 systemd-timesyncd[1560]: Contacted time server 139.162.242.115:123 (0.flatcar.pool.ntp.org). Sep 12 19:44:33.022400 systemd-timesyncd[1560]: Initial clock synchronization to Fri 2025-09-12 19:44:33.121558 UTC. Sep 12 19:44:33.047572 dbus-daemon[1579]: [system] SELinux support is enabled Sep 12 19:44:33.029267 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 19:44:33.050323 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 19:44:33.054641 dbus-daemon[1579]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1258 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Sep 12 19:44:33.061061 extend-filesystems[1581]: Resized partition /dev/vda9 Sep 12 19:44:33.067882 extend-filesystems[1608]: resize2fs 1.47.1 (20-May-2024) Sep 12 19:44:33.069368 systemd[1]: Starting systemd-logind.service - User Login Management... 
Sep 12 19:44:33.073883 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Sep 12 19:44:33.071720 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 19:44:33.086037 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 19:44:33.102974 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 19:44:33.110714 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 19:44:33.121699 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 19:44:33.124784 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 19:44:33.136645 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 19:44:33.140107 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 19:44:33.143528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 19:44:33.151932 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1254) Sep 12 19:44:33.161627 jq[1615]: true Sep 12 19:44:33.163853 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 19:44:33.169235 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Sep 12 19:44:33.174679 update_engine[1611]: I20250912 19:44:33.174096 1611 main.cc:92] Flatcar Update Engine starting Sep 12 19:44:33.185189 update_engine[1611]: I20250912 19:44:33.183077 1611 update_check_scheduler.cc:74] Next update check in 3m52s Sep 12 19:44:33.226450 (ntainerd)[1629]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 19:44:33.228126 jq[1624]: true Sep 12 19:44:33.227997 dbus-daemon[1579]: [system] Successfully activated service 'org.freedesktop.systemd1' Sep 12 19:44:33.249160 tar[1620]: linux-amd64/helm Sep 12 19:44:33.264697 systemd[1]: Started update-engine.service - Update Engine. Sep 12 19:44:33.273723 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 19:44:33.273773 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 19:44:33.285578 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Sep 12 19:44:33.286416 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 19:44:33.286462 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 19:44:33.288138 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 19:44:33.296145 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Sep 12 19:44:33.384769 systemd-logind[1607]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 19:44:33.384820 systemd-logind[1607]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 19:44:33.400025 systemd-logind[1607]: New seat seat0. Sep 12 19:44:33.410186 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 19:44:33.500899 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 19:44:33.544944 bash[1656]: Updated "/home/core/.ssh/authorized_keys" Sep 12 19:44:33.502828 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 19:44:33.521241 systemd[1]: Starting sshkeys.service... Sep 12 19:44:33.548997 extend-filesystems[1608]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 19:44:33.548997 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 19:44:33.548997 extend-filesystems[1608]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 12 19:44:33.547687 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 19:44:33.574232 extend-filesystems[1581]: Resized filesystem in /dev/vda9 Sep 12 19:44:33.551561 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 19:44:33.601724 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 19:44:33.611071 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 19:44:33.630668 locksmithd[1644]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 19:44:33.696332 sshd_keygen[1621]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 19:44:33.735346 dbus-daemon[1579]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 19:44:33.736623 systemd[1]: Started systemd-hostnamed.service - Hostname Service. 
Sep 12 19:44:33.738500 dbus-daemon[1579]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1642 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 19:44:33.752283 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 19:44:33.796041 polkitd[1679]: Started polkitd version 121 Sep 12 19:44:33.800108 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 19:44:33.812247 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 19:44:33.825938 containerd[1629]: time="2025-09-12T19:44:33.824941276Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 19:44:33.835716 polkitd[1679]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 19:44:33.835827 polkitd[1679]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 19:44:33.842952 polkitd[1679]: Finished loading, compiling and executing 2 rules Sep 12 19:44:33.844817 dbus-daemon[1579]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 19:44:33.845080 systemd[1]: Started polkit.service - Authorization Manager. Sep 12 19:44:33.847152 polkitd[1679]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 19:44:33.851462 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 19:44:33.852230 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 19:44:33.864966 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 19:44:33.871155 systemd-networkd[1258]: eth0: Ignoring DHCPv6 address 2a02:1348:179:827b:24:19ff:fee6:9ee/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:827b:24:19ff:fee6:9ee/64 assigned by NDisc. Sep 12 19:44:33.871168 systemd-networkd[1258]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Sep 12 19:44:33.892613 systemd-hostnamed[1642]: Hostname set to (static) Sep 12 19:44:33.904392 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 19:44:33.915339 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 19:44:33.925317 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 19:44:33.928241 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 19:44:33.936481 containerd[1629]: time="2025-09-12T19:44:33.936399412Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.938773 containerd[1629]: time="2025-09-12T19:44:33.938722180Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:44:33.938850 containerd[1629]: time="2025-09-12T19:44:33.938771580Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 19:44:33.938850 containerd[1629]: time="2025-09-12T19:44:33.938801530Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Sep 12 19:44:33.939446 containerd[1629]: time="2025-09-12T19:44:33.939071695Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 19:44:33.939446 containerd[1629]: time="2025-09-12T19:44:33.939107235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939446 containerd[1629]: time="2025-09-12T19:44:33.939220222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939446 containerd[1629]: time="2025-09-12T19:44:33.939244586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939629 containerd[1629]: time="2025-09-12T19:44:33.939561880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939629 containerd[1629]: time="2025-09-12T19:44:33.939588000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939629 containerd[1629]: time="2025-09-12T19:44:33.939617568Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:44:33.939723 containerd[1629]: time="2025-09-12T19:44:33.939636126Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.940161 containerd[1629]: time="2025-09-12T19:44:33.939785051Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.940240 containerd[1629]: time="2025-09-12T19:44:33.940205039Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:44:33.940640 containerd[1629]: time="2025-09-12T19:44:33.940433064Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:44:33.940640 containerd[1629]: time="2025-09-12T19:44:33.940465069Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 19:44:33.940640 containerd[1629]: time="2025-09-12T19:44:33.940595863Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 19:44:33.940793 containerd[1629]: time="2025-09-12T19:44:33.940682458Z" level=info msg="metadata content store policy set" policy=shared Sep 12 19:44:33.961359 containerd[1629]: time="2025-09-12T19:44:33.961037555Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 19:44:33.961359 containerd[1629]: time="2025-09-12T19:44:33.961155714Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 19:44:33.961359 containerd[1629]: time="2025-09-12T19:44:33.961253770Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 19:44:33.961359 containerd[1629]: time="2025-09-12T19:44:33.961283626Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 19:44:33.962998 containerd[1629]: time="2025-09-12T19:44:33.962895557Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Sep 12 19:44:33.963268 containerd[1629]: time="2025-09-12T19:44:33.963226823Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 19:44:33.964161 containerd[1629]: time="2025-09-12T19:44:33.964073075Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Sep 12 19:44:33.964366 containerd[1629]: time="2025-09-12T19:44:33.964326049Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 19:44:33.964429 containerd[1629]: time="2025-09-12T19:44:33.964395772Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 19:44:33.964492 containerd[1629]: time="2025-09-12T19:44:33.964423561Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 19:44:33.964492 containerd[1629]: time="2025-09-12T19:44:33.964469048Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964589 containerd[1629]: time="2025-09-12T19:44:33.964493618Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964589 containerd[1629]: time="2025-09-12T19:44:33.964519685Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964589 containerd[1629]: time="2025-09-12T19:44:33.964562896Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964589 containerd[1629]: time="2025-09-12T19:44:33.964585503Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964721 containerd[1629]: time="2025-09-12T19:44:33.964605475Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964721 containerd[1629]: time="2025-09-12T19:44:33.964654697Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Sep 12 19:44:33.964721 containerd[1629]: time="2025-09-12T19:44:33.964677778Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 19:44:33.964851 containerd[1629]: time="2025-09-12T19:44:33.964750545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.964851 containerd[1629]: time="2025-09-12T19:44:33.964799169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.964851 containerd[1629]: time="2025-09-12T19:44:33.964821049Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.964851 containerd[1629]: time="2025-09-12T19:44:33.964842184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965687579Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965761175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965785341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965804885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965843082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965900262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965929862Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965967880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966006 containerd[1629]: time="2025-09-12T19:44:33.965998855Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966056220Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966099527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966140960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966163257Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966245464Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966282746Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966304136Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966323341Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966339412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966366734Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966395036Z" level=info msg="NRI interface is disabled by configuration." Sep 12 19:44:33.966515 containerd[1629]: time="2025-09-12T19:44:33.966425334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Sep 12 19:44:33.968060 containerd[1629]: time="2025-09-12T19:44:33.966821888Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 19:44:33.968060 containerd[1629]: time="2025-09-12T19:44:33.967784486Z" level=info msg="Connect containerd service" Sep 12 19:44:33.968060 containerd[1629]: time="2025-09-12T19:44:33.967881161Z" level=info msg="using legacy CRI server" Sep 12 19:44:33.968060 containerd[1629]: time="2025-09-12T19:44:33.967911898Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 19:44:33.969281 containerd[1629]: 
time="2025-09-12T19:44:33.968166404Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 19:44:33.971574 containerd[1629]: time="2025-09-12T19:44:33.970814212Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 19:44:33.972125 containerd[1629]: time="2025-09-12T19:44:33.972064456Z" level=info msg="Start subscribing containerd event" Sep 12 19:44:33.973405 containerd[1629]: time="2025-09-12T19:44:33.973337270Z" level=info msg="Start recovering state" Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973548206Z" level=info msg="Start event monitor" Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973622048Z" level=info msg="Start snapshots syncer" Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973644363Z" level=info msg="Start cni network conf syncer for default" Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973658047Z" level=info msg="Start streaming server" Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973783499Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 19:44:33.974647 containerd[1629]: time="2025-09-12T19:44:33.973898002Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 19:44:33.976457 containerd[1629]: time="2025-09-12T19:44:33.975941503Z" level=info msg="containerd successfully booted in 0.153512s" Sep 12 19:44:33.976185 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 19:44:34.359647 tar[1620]: linux-amd64/LICENSE Sep 12 19:44:34.360302 tar[1620]: linux-amd64/README.md Sep 12 19:44:34.374396 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Sep 12 19:44:34.846093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 19:44:34.850952 (kubelet)[1722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 19:44:35.532048 kubelet[1722]: E0912 19:44:35.531943 1722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 19:44:35.534238 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 19:44:35.534567 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 19:44:37.449408 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 19:44:37.461491 systemd[1]: Started sshd@0-10.230.9.238:22-139.178.68.195:44492.service - OpenSSH per-connection server daemon (139.178.68.195:44492). Sep 12 19:44:38.396037 sshd[1732]: Accepted publickey for core from 139.178.68.195 port 44492 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:44:38.404060 sshd[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:44:38.421410 systemd-logind[1607]: New session 1 of user core. Sep 12 19:44:38.424169 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 19:44:38.435448 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 19:44:38.459149 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 19:44:38.471375 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Sep 12 19:44:38.488292 (systemd)[1738]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 19:44:38.637519 systemd[1738]: Queued start job for default target default.target. Sep 12 19:44:38.638092 systemd[1738]: Created slice app.slice - User Application Slice. Sep 12 19:44:38.638132 systemd[1738]: Reached target paths.target - Paths. Sep 12 19:44:38.638156 systemd[1738]: Reached target timers.target - Timers. Sep 12 19:44:38.643995 systemd[1738]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 19:44:38.665274 systemd[1738]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 19:44:38.665544 systemd[1738]: Reached target sockets.target - Sockets. Sep 12 19:44:38.665712 systemd[1738]: Reached target basic.target - Basic System. Sep 12 19:44:38.666150 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 19:44:38.667484 systemd[1738]: Reached target default.target - Main User Target. Sep 12 19:44:38.667971 systemd[1738]: Startup finished in 167ms. Sep 12 19:44:38.683596 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 19:44:38.986858 login[1704]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 19:44:38.991808 login[1703]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 19:44:38.994895 systemd-logind[1607]: New session 2 of user core. Sep 12 19:44:39.003764 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 19:44:39.007405 systemd-logind[1607]: New session 3 of user core. Sep 12 19:44:39.012731 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 19:44:39.327416 systemd[1]: Started sshd@1-10.230.9.238:22-139.178.68.195:44502.service - OpenSSH per-connection server daemon (139.178.68.195:44502). 
Sep 12 19:44:40.037576 coreos-metadata[1577]: Sep 12 19:44:40.037 WARN failed to locate config-drive, using the metadata service API instead Sep 12 19:44:40.066255 coreos-metadata[1577]: Sep 12 19:44:40.066 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 12 19:44:40.078399 coreos-metadata[1577]: Sep 12 19:44:40.078 INFO Fetch failed with 404: resource not found Sep 12 19:44:40.078399 coreos-metadata[1577]: Sep 12 19:44:40.078 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 12 19:44:40.079183 coreos-metadata[1577]: Sep 12 19:44:40.079 INFO Fetch successful Sep 12 19:44:40.079306 coreos-metadata[1577]: Sep 12 19:44:40.079 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 12 19:44:40.094020 coreos-metadata[1577]: Sep 12 19:44:40.093 INFO Fetch successful Sep 12 19:44:40.094191 coreos-metadata[1577]: Sep 12 19:44:40.094 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 12 19:44:40.108780 coreos-metadata[1577]: Sep 12 19:44:40.108 INFO Fetch successful Sep 12 19:44:40.108952 coreos-metadata[1577]: Sep 12 19:44:40.108 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 12 19:44:40.134563 coreos-metadata[1577]: Sep 12 19:44:40.134 INFO Fetch successful Sep 12 19:44:40.134724 coreos-metadata[1577]: Sep 12 19:44:40.134 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 12 19:44:40.156198 coreos-metadata[1577]: Sep 12 19:44:40.156 INFO Fetch successful Sep 12 19:44:40.183510 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 19:44:40.186112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Sep 12 19:44:40.215985 sshd[1778]: Accepted publickey for core from 139.178.68.195 port 44502 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:44:40.218143 sshd[1778]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:44:40.227097 systemd-logind[1607]: New session 4 of user core. Sep 12 19:44:40.234337 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 19:44:40.708588 coreos-metadata[1670]: Sep 12 19:44:40.708 WARN failed to locate config-drive, using the metadata service API instead Sep 12 19:44:40.731233 coreos-metadata[1670]: Sep 12 19:44:40.731 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 12 19:44:40.763058 coreos-metadata[1670]: Sep 12 19:44:40.763 INFO Fetch successful Sep 12 19:44:40.763284 coreos-metadata[1670]: Sep 12 19:44:40.763 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 19:44:40.790560 coreos-metadata[1670]: Sep 12 19:44:40.790 INFO Fetch successful Sep 12 19:44:40.792651 unknown[1670]: wrote ssh authorized keys file for user: core Sep 12 19:44:40.812536 update-ssh-keys[1796]: Updated "/home/core/.ssh/authorized_keys" Sep 12 19:44:40.815157 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 19:44:40.819986 systemd[1]: Finished sshkeys.service. Sep 12 19:44:40.826407 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 19:44:40.826909 systemd[1]: Startup finished in 15.906s (kernel) + 12.575s (userspace) = 28.482s. Sep 12 19:44:40.844763 sshd[1778]: pam_unix(sshd:session): session closed for user core Sep 12 19:44:40.850571 systemd[1]: sshd@1-10.230.9.238:22-139.178.68.195:44502.service: Deactivated successfully. Sep 12 19:44:40.850996 systemd-logind[1607]: Session 4 logged out. Waiting for processes to exit. Sep 12 19:44:40.854519 systemd[1]: session-4.scope: Deactivated successfully. 
Sep 12 19:44:40.855794 systemd-logind[1607]: Removed session 4. Sep 12 19:44:41.008279 systemd[1]: Started sshd@2-10.230.9.238:22-139.178.68.195:36832.service - OpenSSH per-connection server daemon (139.178.68.195:36832). Sep 12 19:44:41.904702 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 36832 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:44:41.907310 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:44:41.915953 systemd-logind[1607]: New session 5 of user core. Sep 12 19:44:41.926384 systemd[1]: Started session-5.scope - Session 5 of User core. Sep 12 19:44:42.531200 sshd[1806]: pam_unix(sshd:session): session closed for user core Sep 12 19:44:42.534733 systemd[1]: sshd@2-10.230.9.238:22-139.178.68.195:36832.service: Deactivated successfully. Sep 12 19:44:42.539350 systemd-logind[1607]: Session 5 logged out. Waiting for processes to exit. Sep 12 19:44:42.540755 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 19:44:42.542148 systemd-logind[1607]: Removed session 5. Sep 12 19:44:45.577774 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 19:44:45.592099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:44:45.792080 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 12 19:44:45.798196 (kubelet)[1826]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:44:45.881478 kubelet[1826]: E0912 19:44:45.881224 1826 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:44:45.888091 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:44:45.888422 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:44:52.715209 systemd[1]: Started sshd@3-10.230.9.238:22-139.178.68.195:41562.service - OpenSSH per-connection server daemon (139.178.68.195:41562).
Sep 12 19:44:53.600691 sshd[1834]: Accepted publickey for core from 139.178.68.195 port 41562 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:44:53.602835 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:44:53.609558 systemd-logind[1607]: New session 6 of user core.
Sep 12 19:44:53.617274 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 19:44:54.220811 sshd[1834]: pam_unix(sshd:session): session closed for user core
Sep 12 19:44:54.224742 systemd[1]: sshd@3-10.230.9.238:22-139.178.68.195:41562.service: Deactivated successfully.
Sep 12 19:44:54.228035 systemd-logind[1607]: Session 6 logged out. Waiting for processes to exit.
Sep 12 19:44:54.229784 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 19:44:54.231777 systemd-logind[1607]: Removed session 6.
Sep 12 19:44:54.377235 systemd[1]: Started sshd@4-10.230.9.238:22-139.178.68.195:41572.service - OpenSSH per-connection server daemon (139.178.68.195:41572).
Sep 12 19:44:55.261175 sshd[1842]: Accepted publickey for core from 139.178.68.195 port 41572 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:44:55.263407 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:44:55.271993 systemd-logind[1607]: New session 7 of user core.
Sep 12 19:44:55.278332 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 19:44:55.876323 sshd[1842]: pam_unix(sshd:session): session closed for user core
Sep 12 19:44:55.880230 systemd[1]: sshd@4-10.230.9.238:22-139.178.68.195:41572.service: Deactivated successfully.
Sep 12 19:44:55.884260 systemd-logind[1607]: Session 7 logged out. Waiting for processes to exit.
Sep 12 19:44:55.884979 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 19:44:55.887494 systemd-logind[1607]: Removed session 7.
Sep 12 19:44:56.018607 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 19:44:56.032116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:44:56.036212 systemd[1]: Started sshd@5-10.230.9.238:22-139.178.68.195:41576.service - OpenSSH per-connection server daemon (139.178.68.195:41576).
Sep 12 19:44:56.198073 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:44:56.211438 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:44:56.317697 kubelet[1864]: E0912 19:44:56.317605 1864 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:44:56.321090 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:44:56.321435 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:44:56.929464 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 41576 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:44:56.931965 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:44:56.941686 systemd-logind[1607]: New session 8 of user core.
Sep 12 19:44:56.948381 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 19:44:57.546284 sshd[1851]: pam_unix(sshd:session): session closed for user core
Sep 12 19:44:57.550678 systemd[1]: sshd@5-10.230.9.238:22-139.178.68.195:41576.service: Deactivated successfully.
Sep 12 19:44:57.554020 systemd-logind[1607]: Session 8 logged out. Waiting for processes to exit.
Sep 12 19:44:57.555675 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 19:44:57.558039 systemd-logind[1607]: Removed session 8.
Sep 12 19:44:57.702331 systemd[1]: Started sshd@6-10.230.9.238:22-139.178.68.195:41592.service - OpenSSH per-connection server daemon (139.178.68.195:41592).
Sep 12 19:44:58.579581 sshd[1878]: Accepted publickey for core from 139.178.68.195 port 41592 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:44:58.581755 sshd[1878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:44:58.588749 systemd-logind[1607]: New session 9 of user core.
Sep 12 19:44:58.598309 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 19:44:59.066325 sudo[1882]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 19:44:59.066814 sudo[1882]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:44:59.085521 sudo[1882]: pam_unix(sudo:session): session closed for user root
Sep 12 19:44:59.230343 sshd[1878]: pam_unix(sshd:session): session closed for user core
Sep 12 19:44:59.237432 systemd[1]: sshd@6-10.230.9.238:22-139.178.68.195:41592.service: Deactivated successfully.
Sep 12 19:44:59.241423 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 19:44:59.242838 systemd-logind[1607]: Session 9 logged out. Waiting for processes to exit.
Sep 12 19:44:59.244713 systemd-logind[1607]: Removed session 9.
Sep 12 19:44:59.389309 systemd[1]: Started sshd@7-10.230.9.238:22-139.178.68.195:41600.service - OpenSSH per-connection server daemon (139.178.68.195:41600).
Sep 12 19:45:00.347334 sshd[1887]: Accepted publickey for core from 139.178.68.195 port 41600 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:45:00.349572 sshd[1887]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:45:00.356129 systemd-logind[1607]: New session 10 of user core.
Sep 12 19:45:00.365468 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 19:45:00.861754 sudo[1892]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 19:45:00.862754 sudo[1892]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:45:00.868607 sudo[1892]: pam_unix(sudo:session): session closed for user root
Sep 12 19:45:00.876204 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 19:45:00.876663 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:45:00.897235 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 19:45:00.901327 auditctl[1895]: No rules
Sep 12 19:45:00.901979 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 19:45:00.902390 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 19:45:00.923125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 19:45:00.956954 augenrules[1914]: No rules
Sep 12 19:45:00.957770 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 19:45:00.959523 sudo[1891]: pam_unix(sudo:session): session closed for user root
Sep 12 19:45:01.113492 sshd[1887]: pam_unix(sshd:session): session closed for user core
Sep 12 19:45:01.119111 systemd[1]: sshd@7-10.230.9.238:22-139.178.68.195:41600.service: Deactivated successfully.
Sep 12 19:45:01.122915 systemd-logind[1607]: Session 10 logged out. Waiting for processes to exit.
Sep 12 19:45:01.123303 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 19:45:01.125463 systemd-logind[1607]: Removed session 10.
Sep 12 19:45:01.265203 systemd[1]: Started sshd@8-10.230.9.238:22-139.178.68.195:42640.service - OpenSSH per-connection server daemon (139.178.68.195:42640).
Sep 12 19:45:02.150954 sshd[1923]: Accepted publickey for core from 139.178.68.195 port 42640 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:45:02.153178 sshd[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:45:02.163297 systemd-logind[1607]: New session 11 of user core.
Sep 12 19:45:02.171554 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 19:45:02.629025 sudo[1927]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 19:45:02.629507 sudo[1927]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:45:03.110636 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 19:45:03.119759 (dockerd)[1942]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 19:45:03.563710 dockerd[1942]: time="2025-09-12T19:45:03.563377302Z" level=info msg="Starting up"
Sep 12 19:45:03.823141 dockerd[1942]: time="2025-09-12T19:45:03.822768166Z" level=info msg="Loading containers: start."
Sep 12 19:45:03.955342 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 19:45:03.972902 kernel: Initializing XFRM netlink socket
Sep 12 19:45:04.096333 systemd-networkd[1258]: docker0: Link UP
Sep 12 19:45:04.114894 dockerd[1942]: time="2025-09-12T19:45:04.114660872Z" level=info msg="Loading containers: done."
Sep 12 19:45:04.133671 dockerd[1942]: time="2025-09-12T19:45:04.133617991Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 19:45:04.133944 dockerd[1942]: time="2025-09-12T19:45:04.133764832Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 19:45:04.134026 dockerd[1942]: time="2025-09-12T19:45:04.133949731Z" level=info msg="Daemon has completed initialization"
Sep 12 19:45:04.179591 dockerd[1942]: time="2025-09-12T19:45:04.179458085Z" level=info msg="API listen on /run/docker.sock"
Sep 12 19:45:04.180047 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 19:45:05.512211 containerd[1629]: time="2025-09-12T19:45:05.511474579Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 12 19:45:06.328338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 19:45:06.338157 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:06.431760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021587255.mount: Deactivated successfully.
Sep 12 19:45:06.576118 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:06.582203 (kubelet)[2105]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:45:06.704693 kubelet[2105]: E0912 19:45:06.704604 2105 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:45:06.709109 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:45:06.709456 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:45:08.643947 containerd[1629]: time="2025-09-12T19:45:08.642328425Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:08.643947 containerd[1629]: time="2025-09-12T19:45:08.643929626Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=28117132"
Sep 12 19:45:08.645533 containerd[1629]: time="2025-09-12T19:45:08.645499116Z" level=info msg="ImageCreate event name:\"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:08.649844 containerd[1629]: time="2025-09-12T19:45:08.649795618Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:08.651760 containerd[1629]: time="2025-09-12T19:45:08.651720433Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"28113723\" in 3.140136549s"
Sep 12 19:45:08.651964 containerd[1629]: time="2025-09-12T19:45:08.651932460Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:368da3301bb03f4bef9f7dc2084f5fc5954b0ac1bf1e49ca502e3a7604011e54\""
Sep 12 19:45:08.652959 containerd[1629]: time="2025-09-12T19:45:08.652928628Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 12 19:45:10.492084 containerd[1629]: time="2025-09-12T19:45:10.491958856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:10.494506 containerd[1629]: time="2025-09-12T19:45:10.494204714Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=24716640"
Sep 12 19:45:10.496079 containerd[1629]: time="2025-09-12T19:45:10.495405659Z" level=info msg="ImageCreate event name:\"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:10.500381 containerd[1629]: time="2025-09-12T19:45:10.500339582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:10.501986 containerd[1629]: time="2025-09-12T19:45:10.501935022Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"26351311\" in 1.848844434s"
Sep 12 19:45:10.502073 containerd[1629]: time="2025-09-12T19:45:10.502014709Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:cbd19105c6bcbedf394f51c8bb963def5195c300fc7d04bc39d48d14d23c0ff0\""
Sep 12 19:45:10.503762 containerd[1629]: time="2025-09-12T19:45:10.503689392Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 12 19:45:12.267893 containerd[1629]: time="2025-09-12T19:45:12.266306263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:12.268544 containerd[1629]: time="2025-09-12T19:45:12.267926575Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=18787706"
Sep 12 19:45:12.268877 containerd[1629]: time="2025-09-12T19:45:12.268825314Z" level=info msg="ImageCreate event name:\"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:12.273488 containerd[1629]: time="2025-09-12T19:45:12.273455240Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:12.275343 containerd[1629]: time="2025-09-12T19:45:12.275301867Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"20422395\" in 1.771523713s"
Sep 12 19:45:12.275441 containerd[1629]: time="2025-09-12T19:45:12.275346607Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:d019d989e2b1f0b08ea7eebd4dd7673bdd6ba2218a3c5a6bd53f6848d5fc1af6\""
Sep 12 19:45:12.277549 containerd[1629]: time="2025-09-12T19:45:12.277498300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 12 19:45:13.853729 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3252483979.mount: Deactivated successfully.
Sep 12 19:45:14.546913 containerd[1629]: time="2025-09-12T19:45:14.545765996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:14.546913 containerd[1629]: time="2025-09-12T19:45:14.546842048Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=30410260"
Sep 12 19:45:14.547698 containerd[1629]: time="2025-09-12T19:45:14.547663741Z" level=info msg="ImageCreate event name:\"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:14.550301 containerd[1629]: time="2025-09-12T19:45:14.550266633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:14.551652 containerd[1629]: time="2025-09-12T19:45:14.551588458Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"30409271\" in 2.273795785s"
Sep 12 19:45:14.551759 containerd[1629]: time="2025-09-12T19:45:14.551663296Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:21d97a49eeb0b08ecaba421a84a79ca44cf2bc57773c085bbfda537488790ad7\""
Sep 12 19:45:14.552326 containerd[1629]: time="2025-09-12T19:45:14.552261087Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 19:45:15.139791 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount731636482.mount: Deactivated successfully.
Sep 12 19:45:16.828020 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 19:45:16.842126 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:17.084077 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:17.095663 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:45:17.141885 containerd[1629]: time="2025-09-12T19:45:17.140195021Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.143119 containerd[1629]: time="2025-09-12T19:45:17.143032485Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Sep 12 19:45:17.143296 containerd[1629]: time="2025-09-12T19:45:17.143237052Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.154837 containerd[1629]: time="2025-09-12T19:45:17.152324658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.155315 containerd[1629]: time="2025-09-12T19:45:17.155275522Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.602940427s"
Sep 12 19:45:17.155391 containerd[1629]: time="2025-09-12T19:45:17.155350218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 19:45:17.157200 containerd[1629]: time="2025-09-12T19:45:17.157141407Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 19:45:17.174694 kubelet[2246]: E0912 19:45:17.174579 2246 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:45:17.178027 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:45:17.178467 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:45:17.754634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount994882234.mount: Deactivated successfully.
Sep 12 19:45:17.761466 containerd[1629]: time="2025-09-12T19:45:17.761409715Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.778240 containerd[1629]: time="2025-09-12T19:45:17.778160799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 12 19:45:17.779445 containerd[1629]: time="2025-09-12T19:45:17.779382527Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.784294 containerd[1629]: time="2025-09-12T19:45:17.782923090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:17.784294 containerd[1629]: time="2025-09-12T19:45:17.784067093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 626.562726ms"
Sep 12 19:45:17.784294 containerd[1629]: time="2025-09-12T19:45:17.784109133Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 19:45:17.784986 containerd[1629]: time="2025-09-12T19:45:17.784938438Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 12 19:45:18.360612 update_engine[1611]: I20250912 19:45:18.360342 1611 update_attempter.cc:509] Updating boot flags...
Sep 12 19:45:18.425922 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2264)
Sep 12 19:45:18.516261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2402043705.mount: Deactivated successfully.
Sep 12 19:45:18.552705 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2264)
Sep 12 19:45:21.063402 containerd[1629]: time="2025-09-12T19:45:21.063295912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:21.065889 containerd[1629]: time="2025-09-12T19:45:21.065828170Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56910717"
Sep 12 19:45:21.067098 containerd[1629]: time="2025-09-12T19:45:21.067039403Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:21.071965 containerd[1629]: time="2025-09-12T19:45:21.071567823Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:21.075454 containerd[1629]: time="2025-09-12T19:45:21.074855214Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.28974052s"
Sep 12 19:45:21.075454 containerd[1629]: time="2025-09-12T19:45:21.074937389Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Sep 12 19:45:25.588530 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:25.606489 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:25.647523 systemd[1]: Reloading requested from client PID 2352 ('systemctl') (unit session-11.scope)...
Sep 12 19:45:25.647737 systemd[1]: Reloading...
Sep 12 19:45:25.876906 zram_generator::config[2387]: No configuration found.
Sep 12 19:45:26.012483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 19:45:26.122567 systemd[1]: Reloading finished in 473 ms.
Sep 12 19:45:26.185652 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 19:45:26.188071 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 19:45:26.188565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:26.199305 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:26.479152 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:26.481096 (kubelet)[2467]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 19:45:26.546792 kubelet[2467]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:45:26.547562 kubelet[2467]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 19:45:26.547562 kubelet[2467]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:45:26.549343 kubelet[2467]: I0912 19:45:26.549222 2467 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 19:45:26.985916 kubelet[2467]: I0912 19:45:26.985376 2467 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 19:45:26.985916 kubelet[2467]: I0912 19:45:26.985419 2467 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 19:45:26.985916 kubelet[2467]: I0912 19:45:26.985795 2467 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 19:45:27.017488 kubelet[2467]: I0912 19:45:27.016949 2467 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 19:45:27.021937 kubelet[2467]: E0912 19:45:27.021878 2467 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.9.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:27.035712 kubelet[2467]: E0912 19:45:27.035663 2467 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 19:45:27.035712 kubelet[2467]: I0912 19:45:27.035708 2467 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 19:45:27.045634 kubelet[2467]: I0912 19:45:27.045557 2467 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 19:45:27.047370 kubelet[2467]: I0912 19:45:27.047324 2467 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 19:45:27.047614 kubelet[2467]: I0912 19:45:27.047539 2467 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 19:45:27.047880 kubelet[2467]: I0912 19:45:27.047620 2467 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-l18mb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 19:45:27.048146 kubelet[2467]: I0912 19:45:27.047929 2467 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 19:45:27.048146 kubelet[2467]: I0912 19:45:27.047948 2467 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 19:45:27.048297 kubelet[2467]: I0912 19:45:27.048147 2467 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 19:45:27.051438 kubelet[2467]: I0912 19:45:27.051386 2467 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 19:45:27.051438 kubelet[2467]: I0912 19:45:27.051432 2467 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 19:45:27.053943 kubelet[2467]: I0912 19:45:27.053885 2467 kubelet.go:314] "Adding apiserver pod source"
Sep 12 19:45:27.053943 kubelet[2467]: I0912 19:45:27.053942 2467 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 19:45:27.060648 kubelet[2467]: W0912 19:45:27.059558 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.9.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-l18mb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused
Sep 12 19:45:27.060648 kubelet[2467]: E0912 19:45:27.059639 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.9.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-l18mb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:27.060648 kubelet[2467]: W0912 19:45:27.060271 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.9.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443:
connect: connection refused Sep 12 19:45:27.060648 kubelet[2467]: E0912 19:45:27.060356 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.9.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:45:27.061088 kubelet[2467]: I0912 19:45:27.061063 2467 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 19:45:27.064386 kubelet[2467]: I0912 19:45:27.064361 2467 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 19:45:27.064612 kubelet[2467]: W0912 19:45:27.064590 2467 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 19:45:27.066695 kubelet[2467]: I0912 19:45:27.066675 2467 server.go:1274] "Started kubelet" Sep 12 19:45:27.067111 kubelet[2467]: I0912 19:45:27.067050 2467 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 19:45:27.069518 kubelet[2467]: I0912 19:45:27.069494 2467 server.go:449] "Adding debug handlers to kubelet server" Sep 12 19:45:27.070975 kubelet[2467]: I0912 19:45:27.070933 2467 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 19:45:27.071711 kubelet[2467]: I0912 19:45:27.071373 2467 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 19:45:27.073372 kubelet[2467]: E0912 19:45:27.071605 2467 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.9.238:6443/api/v1/namespaces/default/events\": dial tcp 10.230.9.238:6443: connect: connection refused" 
event="&Event{ObjectMeta:{srv-l18mb.gb1.brightbox.com.1864a0977c4377c6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-l18mb.gb1.brightbox.com,UID:srv-l18mb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-l18mb.gb1.brightbox.com,},FirstTimestamp:2025-09-12 19:45:27.066638278 +0000 UTC m=+0.578527536,LastTimestamp:2025-09-12 19:45:27.066638278 +0000 UTC m=+0.578527536,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-l18mb.gb1.brightbox.com,}" Sep 12 19:45:27.080561 kubelet[2467]: I0912 19:45:27.076904 2467 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 19:45:27.082020 kubelet[2467]: I0912 19:45:27.081993 2467 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 19:45:27.089002 kubelet[2467]: I0912 19:45:27.087423 2467 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 12 19:45:27.089002 kubelet[2467]: E0912 19:45:27.087708 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-l18mb.gb1.brightbox.com\" not found" Sep 12 19:45:27.091532 kubelet[2467]: I0912 19:45:27.091503 2467 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 12 19:45:27.091619 kubelet[2467]: I0912 19:45:27.091607 2467 reconciler.go:26] "Reconciler: start to sync state" Sep 12 19:45:27.093301 kubelet[2467]: E0912 19:45:27.093234 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-l18mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.238:6443: connect: connection refused" interval="200ms" Sep 12 19:45:27.093642 
kubelet[2467]: I0912 19:45:27.093616 2467 factory.go:221] Registration of the systemd container factory successfully Sep 12 19:45:27.093889 kubelet[2467]: I0912 19:45:27.093847 2467 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 19:45:27.110221 kubelet[2467]: W0912 19:45:27.110170 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.9.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused Sep 12 19:45:27.197415 kubelet[2467]: E0912 19:45:27.197246 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.9.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:45:27.197415 kubelet[2467]: E0912 19:45:27.197136 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-l18mb.gb1.brightbox.com\" not found" Sep 12 19:45:27.199394 kubelet[2467]: I0912 19:45:27.199356 2467 factory.go:221] Registration of the containerd container factory successfully Sep 12 19:45:27.298968 kubelet[2467]: E0912 19:45:27.294854 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-l18mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.238:6443: connect: connection refused" interval="400ms" Sep 12 19:45:27.337088 kubelet[2467]: E0912 19:45:27.336965 2467 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-l18mb.gb1.brightbox.com\" not found" Sep 12 19:45:27.348952 kubelet[2467]: I0912 19:45:27.346793 2467 
kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 19:45:27.348952 kubelet[2467]: I0912 19:45:27.348378 2467 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 19:45:27.348952 kubelet[2467]: I0912 19:45:27.348426 2467 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 12 19:45:27.348952 kubelet[2467]: I0912 19:45:27.348472 2467 kubelet.go:2321] "Starting kubelet main sync loop" Sep 12 19:45:27.348952 kubelet[2467]: E0912 19:45:27.348546 2467 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 19:45:27.365314 kubelet[2467]: W0912 19:45:27.365215 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.9.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused Sep 12 19:45:27.365539 kubelet[2467]: E0912 19:45:27.365507 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.9.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:45:27.381025 kubelet[2467]: I0912 19:45:27.380990 2467 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 12 19:45:27.381025 kubelet[2467]: I0912 19:45:27.381020 2467 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 12 19:45:27.381228 kubelet[2467]: I0912 19:45:27.381051 2467 state_mem.go:36] "Initialized new in-memory state store" Sep 12 19:45:27.383316 kubelet[2467]: I0912 19:45:27.383262 2467 policy_none.go:49] "None policy: Start" Sep 12 19:45:27.384136 kubelet[2467]: I0912 19:45:27.384099 2467 memory_manager.go:170] "Starting memorymanager" policy="None" 
Sep 12 19:45:27.384224 kubelet[2467]: I0912 19:45:27.384148 2467 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 19:45:27.391201 kubelet[2467]: I0912 19:45:27.391147 2467 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 19:45:27.391439 kubelet[2467]: I0912 19:45:27.391399 2467 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 19:45:27.391506 kubelet[2467]: I0912 19:45:27.391436 2467 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 19:45:27.393655 kubelet[2467]: I0912 19:45:27.393591 2467 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 19:45:27.400723 kubelet[2467]: E0912 19:45:27.400680 2467 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-l18mb.gb1.brightbox.com\" not found"
Sep 12 19:45:27.494716 kubelet[2467]: I0912 19:45:27.494236 2467 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.495067 kubelet[2467]: E0912 19:45:27.495022 2467 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.9.238:6443/api/v1/nodes\": dial tcp 10.230.9.238:6443: connect: connection refused" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.537940 kubelet[2467]: I0912 19:45:27.537686 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-ca-certs\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.537940 kubelet[2467]: I0912 19:45:27.537743 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.537940 kubelet[2467]: I0912 19:45:27.537784 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-ca-certs\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.537940 kubelet[2467]: I0912 19:45:27.537812 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-flexvolume-dir\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.537940 kubelet[2467]: I0912 19:45:27.537840 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-k8s-certs\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.538365 kubelet[2467]: I0912 19:45:27.537907 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-kubeconfig\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.538365 kubelet[2467]: I0912 19:45:27.537951 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4547be40e0fdbed438707680f1bbc55-kubeconfig\") pod \"kube-scheduler-srv-l18mb.gb1.brightbox.com\" (UID: \"e4547be40e0fdbed438707680f1bbc55\") " pod="kube-system/kube-scheduler-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.538365 kubelet[2467]: I0912 19:45:27.537983 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-k8s-certs\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.538365 kubelet[2467]: I0912 19:45:27.538015 2467 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.696420 kubelet[2467]: E0912 19:45:27.696342 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-l18mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.238:6443: connect: connection refused" interval="800ms"
Sep 12 19:45:27.698733 kubelet[2467]: I0912 19:45:27.698686 2467 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.699152 kubelet[2467]: E0912 19:45:27.699112 2467 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.9.238:6443/api/v1/nodes\": dial tcp 10.230.9.238:6443: connect: connection refused" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:27.763606 containerd[1629]: time="2025-09-12T19:45:27.762963960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-l18mb.gb1.brightbox.com,Uid:646dbaf3a05b435a850f4b4f1c2696a1,Namespace:kube-system,Attempt:0,}"
Sep 12 19:45:27.768758 containerd[1629]: time="2025-09-12T19:45:27.768701610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-l18mb.gb1.brightbox.com,Uid:e4547be40e0fdbed438707680f1bbc55,Namespace:kube-system,Attempt:0,}"
Sep 12 19:45:27.769366 containerd[1629]: time="2025-09-12T19:45:27.769290443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-l18mb.gb1.brightbox.com,Uid:e0014f465e34c14feb0c1324717cd888,Namespace:kube-system,Attempt:0,}"
Sep 12 19:45:28.103458 kubelet[2467]: I0912 19:45:28.103199 2467 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:28.104032 kubelet[2467]: E0912 19:45:28.103999 2467 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.9.238:6443/api/v1/nodes\": dial tcp 10.230.9.238:6443: connect: connection refused" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:28.193234 kubelet[2467]: W0912 19:45:28.193093 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.9.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused
Sep 12 19:45:28.193234 kubelet[2467]: E0912 19:45:28.193184 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.9.238:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:28.222443 kubelet[2467]: W0912 19:45:28.222239 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.9.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused
Sep 12 19:45:28.222443 kubelet[2467]: E0912 19:45:28.222394 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.9.238:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:28.394612 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1113040638.mount: Deactivated successfully.
Sep 12 19:45:28.403440 containerd[1629]: time="2025-09-12T19:45:28.401817047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 19:45:28.403440 containerd[1629]: time="2025-09-12T19:45:28.403396808Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 19:45:28.404115 containerd[1629]: time="2025-09-12T19:45:28.403971747Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 19:45:28.406166 containerd[1629]: time="2025-09-12T19:45:28.405972002Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Sep 12 19:45:28.407144 containerd[1629]: time="2025-09-12T19:45:28.407109342Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 19:45:28.408891 containerd[1629]: time="2025-09-12T19:45:28.408572542Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Sep 12 19:45:28.409620 containerd[1629]: time="2025-09-12T19:45:28.409587149Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 19:45:28.415529 containerd[1629]: time="2025-09-12T19:45:28.415495043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 19:45:28.418735 containerd[1629]: time="2025-09-12T19:45:28.418352074Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.947431ms"
Sep 12 19:45:28.421893 containerd[1629]: time="2025-09-12T19:45:28.421853776Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.069937ms"
Sep 12 19:45:28.422915 containerd[1629]: time="2025-09-12T19:45:28.422722549Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 659.587003ms"
Sep 12 19:45:28.456920 kubelet[2467]: W0912 19:45:28.455986 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.9.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-l18mb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused
Sep 12 19:45:28.456920 kubelet[2467]: E0912 19:45:28.456087 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.9.238:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-l18mb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:28.497138 kubelet[2467]: E0912 19:45:28.497028 2467 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.9.238:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-l18mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.9.238:6443: connect: connection refused" interval="1.6s"
Sep 12 19:45:28.578385 kubelet[2467]: W0912 19:45:28.578149 2467 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.9.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.9.238:6443: connect: connection refused
Sep 12 19:45:28.578385 kubelet[2467]: E0912 19:45:28.578236 2467 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.9.238:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:28.591257 containerd[1629]: time="2025-09-12T19:45:28.591100528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:28.591896 containerd[1629]: time="2025-09-12T19:45:28.591574169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:28.591896 containerd[1629]: time="2025-09-12T19:45:28.591778508Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.593623 containerd[1629]: time="2025-09-12T19:45:28.593478422Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.595463 containerd[1629]: time="2025-09-12T19:45:28.595128487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:28.595463 containerd[1629]: time="2025-09-12T19:45:28.595192324Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:28.595463 containerd[1629]: time="2025-09-12T19:45:28.595214364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.595463 containerd[1629]: time="2025-09-12T19:45:28.595336472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.597835 containerd[1629]: time="2025-09-12T19:45:28.597643891Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:28.597835 containerd[1629]: time="2025-09-12T19:45:28.597767800Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:28.598144 containerd[1629]: time="2025-09-12T19:45:28.597847855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.598144 containerd[1629]: time="2025-09-12T19:45:28.598094232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:28.710407 containerd[1629]: time="2025-09-12T19:45:28.707602859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-l18mb.gb1.brightbox.com,Uid:e0014f465e34c14feb0c1324717cd888,Namespace:kube-system,Attempt:0,} returns sandbox id \"d225a315bcad4835882f8d0a5b6a3c7b79a24f6b525bfc19daee91e59dec2935\""
Sep 12 19:45:28.722048 containerd[1629]: time="2025-09-12T19:45:28.722012365Z" level=info msg="CreateContainer within sandbox \"d225a315bcad4835882f8d0a5b6a3c7b79a24f6b525bfc19daee91e59dec2935\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 12 19:45:28.748024 containerd[1629]: time="2025-09-12T19:45:28.747892177Z" level=info msg="CreateContainer within sandbox \"d225a315bcad4835882f8d0a5b6a3c7b79a24f6b525bfc19daee91e59dec2935\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"424c70fc620e8980d8e37a69704a75cb6b2bd5634965bbc29889aa72b6c9a000\""
Sep 12 19:45:28.749006 containerd[1629]: time="2025-09-12T19:45:28.748903149Z" level=info msg="StartContainer for \"424c70fc620e8980d8e37a69704a75cb6b2bd5634965bbc29889aa72b6c9a000\""
Sep 12 19:45:28.758205 containerd[1629]: time="2025-09-12T19:45:28.757999105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-l18mb.gb1.brightbox.com,Uid:e4547be40e0fdbed438707680f1bbc55,Namespace:kube-system,Attempt:0,} returns sandbox id \"6320da7f60c9a450e988821bb0ebf20cabe4b19311eb7cec91827ddf5b359fa4\""
Sep 12 19:45:28.764542 containerd[1629]: time="2025-09-12T19:45:28.764432868Z" level=info msg="CreateContainer within sandbox \"6320da7f60c9a450e988821bb0ebf20cabe4b19311eb7cec91827ddf5b359fa4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 12 19:45:28.769107 containerd[1629]: time="2025-09-12T19:45:28.769072277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-l18mb.gb1.brightbox.com,Uid:646dbaf3a05b435a850f4b4f1c2696a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e57888244af2919fd2ba00b6f2924a0a773618f8b8e5443ee5a9761854372f6\""
Sep 12 19:45:28.773538 containerd[1629]: time="2025-09-12T19:45:28.773505994Z" level=info msg="CreateContainer within sandbox \"8e57888244af2919fd2ba00b6f2924a0a773618f8b8e5443ee5a9761854372f6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 12 19:45:28.813839 containerd[1629]: time="2025-09-12T19:45:28.813660696Z" level=info msg="CreateContainer within sandbox \"6320da7f60c9a450e988821bb0ebf20cabe4b19311eb7cec91827ddf5b359fa4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"040d148e67a50b791b7ad84dedd37e8f0d1d16546b4090134bad64944f50fd3c\""
Sep 12 19:45:28.814906 containerd[1629]: time="2025-09-12T19:45:28.814743911Z" level=info msg="StartContainer for \"040d148e67a50b791b7ad84dedd37e8f0d1d16546b4090134bad64944f50fd3c\""
Sep 12 19:45:28.817269 containerd[1629]: time="2025-09-12T19:45:28.817236748Z" level=info msg="CreateContainer within sandbox \"8e57888244af2919fd2ba00b6f2924a0a773618f8b8e5443ee5a9761854372f6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db04c058960c25ce425664e93acdfcc54b7acfe9bcf991a975f7f84fc2fb33fb\""
Sep 12 19:45:28.818424 containerd[1629]: time="2025-09-12T19:45:28.818364997Z" level=info msg="StartContainer for \"db04c058960c25ce425664e93acdfcc54b7acfe9bcf991a975f7f84fc2fb33fb\""
Sep 12 19:45:28.899144 containerd[1629]: time="2025-09-12T19:45:28.898447656Z" level=info msg="StartContainer for \"424c70fc620e8980d8e37a69704a75cb6b2bd5634965bbc29889aa72b6c9a000\" returns successfully"
Sep 12 19:45:28.917405 kubelet[2467]: I0912 19:45:28.916154 2467 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:28.919901 kubelet[2467]: E0912 19:45:28.919347 2467 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.9.238:6443/api/v1/nodes\": dial tcp 10.230.9.238:6443: connect: connection refused" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:28.973985 containerd[1629]: time="2025-09-12T19:45:28.972498423Z" level=info msg="StartContainer for \"db04c058960c25ce425664e93acdfcc54b7acfe9bcf991a975f7f84fc2fb33fb\" returns successfully"
Sep 12 19:45:29.029553 containerd[1629]: time="2025-09-12T19:45:29.029497827Z" level=info msg="StartContainer for \"040d148e67a50b791b7ad84dedd37e8f0d1d16546b4090134bad64944f50fd3c\" returns successfully"
Sep 12 19:45:29.174899 kubelet[2467]: E0912 19:45:29.174211 2467 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.9.238:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.9.238:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:45:30.526041 kubelet[2467]: I0912 19:45:30.525198 2467 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:32.152483 kubelet[2467]: E0912 19:45:32.152401 2467 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-l18mb.gb1.brightbox.com\" not found" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:32.169675 kubelet[2467]: I0912 19:45:32.169464 2467 kubelet_node_status.go:75] "Successfully registered node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:32.169675 kubelet[2467]: E0912 19:45:32.169522 2467 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-l18mb.gb1.brightbox.com\": node \"srv-l18mb.gb1.brightbox.com\" not found"
Sep 12 19:45:33.068256 kubelet[2467]: I0912 19:45:33.068035 2467 apiserver.go:52] "Watching apiserver"
Sep 12 19:45:33.092656 kubelet[2467]: I0912 19:45:33.092622 2467 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 19:45:34.453435 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-11.scope)...
Sep 12 19:45:34.454498 systemd[1]: Reloading...
Sep 12 19:45:34.563988 zram_generator::config[2782]: No configuration found.
Sep 12 19:45:34.765686 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 19:45:34.887467 systemd[1]: Reloading finished in 432 ms.
Sep 12 19:45:34.948191 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:34.967614 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 19:45:34.968260 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:34.976476 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:45:35.216108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:45:35.228424 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 19:45:35.344910 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:45:35.344910 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 12 19:45:35.344910 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:45:35.345562 kubelet[2856]: I0912 19:45:35.344992 2856 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 19:45:35.354931 kubelet[2856]: I0912 19:45:35.354026 2856 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 12 19:45:35.354931 kubelet[2856]: I0912 19:45:35.354056 2856 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 19:45:35.354931 kubelet[2856]: I0912 19:45:35.354539 2856 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 12 19:45:35.358465 kubelet[2856]: I0912 19:45:35.358441 2856 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Sep 12 19:45:35.361639 kubelet[2856]: I0912 19:45:35.361603 2856 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 19:45:35.371472 kubelet[2856]: E0912 19:45:35.371419 2856 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 19:45:35.371584 kubelet[2856]: I0912 19:45:35.371473 2856 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 19:45:35.380925 kubelet[2856]: I0912 19:45:35.379440 2856 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 19:45:35.380925 kubelet[2856]: I0912 19:45:35.380011 2856 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 12 19:45:35.380925 kubelet[2856]: I0912 19:45:35.380209 2856 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380260 2856 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-l18mb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380526 2856 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380543 2856 container_manager_linux.go:300] "Creating device plugin manager"
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380620 2856 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380790 2856 kubelet.go:408] "Attempting to sync node with API server"
Sep 12 19:45:35.381128 kubelet[2856]: I0912 19:45:35.380819 2856 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 19:45:35.381691 kubelet[2856]: I0912 19:45:35.381649 2856 kubelet.go:314] "Adding apiserver pod source"
Sep 12 19:45:35.381833 kubelet[2856]: I0912 19:45:35.381813 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 19:45:35.387262 kubelet[2856]: I0912 19:45:35.386987 2856 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 19:45:35.392406 kubelet[2856]: I0912 19:45:35.392371 2856 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 19:45:35.396686 kubelet[2856]: I0912 19:45:35.396320 2856 server.go:1274] "Started kubelet"
Sep 12 19:45:35.401119 kubelet[2856]: I0912 19:45:35.401083 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 19:45:35.412223 kubelet[2856]: I0912 19:45:35.412088 2856 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 19:45:35.416068 kubelet[2856]: I0912 19:45:35.414929 2856 server.go:449] "Adding debug handlers to kubelet server"
Sep 12 19:45:35.418591 kubelet[2856]: I0912 19:45:35.417315 2856 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 12 19:45:35.420646 kubelet[2856]: I0912 19:45:35.419607 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 19:45:35.423395 kubelet[2856]: I0912 19:45:35.421658 2856 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 19:45:35.423499 kubelet[2856]: I0912 19:45:35.423399 2856 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 19:45:35.423848 kubelet[2856]: I0912 19:45:35.423167 2856 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 12 19:45:35.427347 kubelet[2856]: I0912 19:45:35.427302 2856 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 19:45:35.432097 kubelet[2856]: I0912 19:45:35.430830 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 12 19:45:35.434507 kubelet[2856]: I0912 19:45:35.434447 2856 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 12 19:45:35.434599 kubelet[2856]: I0912 19:45:35.434526 2856 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 12 19:45:35.434599 kubelet[2856]: I0912 19:45:35.434593 2856 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 12 19:45:35.434787 kubelet[2856]: E0912 19:45:35.434762 2856 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 12 19:45:35.436740 kubelet[2856]: E0912 19:45:35.434671 2856 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 12 19:45:35.436740 kubelet[2856]: I0912 19:45:35.435276 2856 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 12 19:45:35.449598 kubelet[2856]: I0912 19:45:35.449561 2856 factory.go:221] Registration of the containerd container factory successfully
Sep 12 19:45:35.450102 kubelet[2856]: I0912 19:45:35.450081 2856 factory.go:221] Registration of the systemd container factory successfully
Sep 12 19:45:35.536143 kubelet[2856]: E0912 19:45:35.535958 2856 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 12 19:45:35.559110 kubelet[2856]: I0912 19:45:35.559056 2856 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 12 19:45:35.559110 kubelet[2856]: I0912 19:45:35.559085 2856 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 12 19:45:35.559110 kubelet[2856]: I0912 19:45:35.559119 2856 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 19:45:35.559423 kubelet[2856]: I0912 19:45:35.559369 2856 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 12 19:45:35.559423 kubelet[2856]: I0912 19:45:35.559394 2856 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 12 19:45:35.559564 kubelet[2856]: I0912 19:45:35.559433 2856 policy_none.go:49] "None policy: Start"
Sep 12 19:45:35.560729 kubelet[2856]: I0912 19:45:35.560288 2856 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 12 19:45:35.560729 kubelet[2856]: I0912 19:45:35.560332 2856 state_mem.go:35] "Initializing new in-memory state store"
Sep 12 19:45:35.560729 kubelet[2856]: I0912 19:45:35.560510 2856 state_mem.go:75] "Updated machine memory state"
Sep 12 19:45:35.570469 kubelet[2856]: I0912 19:45:35.568609 2856 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 12 19:45:35.570469 kubelet[2856]: I0912 19:45:35.568955 2856 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 12 19:45:35.570469 kubelet[2856]: I0912 19:45:35.568986 2856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 12 19:45:35.574916 kubelet[2856]: I0912 19:45:35.574761 2856 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 12 19:45:35.690483 kubelet[2856]: I0912 19:45:35.690369 2856 kubelet_node_status.go:72] "Attempting to register node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.702526 kubelet[2856]: I0912 19:45:35.702469 2856 kubelet_node_status.go:111] "Node was previously registered" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.702947 kubelet[2856]: I0912 19:45:35.702819 2856 kubelet_node_status.go:75] "Successfully registered node" node="srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.753886 kubelet[2856]: W0912 19:45:35.753213 2856 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 12 19:45:35.756540 kubelet[2856]: W0912 19:45:35.756241 2856 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 12 19:45:35.758598 kubelet[2856]: W0912 19:45:35.758232 2856 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.824928 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-usr-share-ca-certificates\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.824997 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-flexvolume-dir\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825079 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-k8s-certs\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825131 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825208 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4547be40e0fdbed438707680f1bbc55-kubeconfig\") pod \"kube-scheduler-srv-l18mb.gb1.brightbox.com\" (UID: \"e4547be40e0fdbed438707680f1bbc55\") " pod="kube-system/kube-scheduler-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825253 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-ca-certs\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825751 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/646dbaf3a05b435a850f4b4f1c2696a1-k8s-certs\") pod \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" (UID: \"646dbaf3a05b435a850f4b4f1c2696a1\") " pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825785 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-ca-certs\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:35.825974 kubelet[2856]: I0912 19:45:35.825830 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e0014f465e34c14feb0c1324717cd888-kubeconfig\") pod \"kube-controller-manager-srv-l18mb.gb1.brightbox.com\" (UID: \"e0014f465e34c14feb0c1324717cd888\") " pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:36.384813 kubelet[2856]: I0912 19:45:36.383136 2856 apiserver.go:52] "Watching apiserver"
Sep 12 19:45:36.424630 kubelet[2856]: I0912 19:45:36.424503 2856 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 12 19:45:36.496446 kubelet[2856]: W0912 19:45:36.496129 2856 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Sep 12 19:45:36.496446 kubelet[2856]: E0912 19:45:36.496227 2856 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-l18mb.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com"
Sep 12 19:45:36.540932 kubelet[2856]: I0912 19:45:36.540827 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-l18mb.gb1.brightbox.com" podStartSLOduration=1.540792098 podStartE2EDuration="1.540792098s" podCreationTimestamp="2025-09-12 19:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:45:36.527715357 +0000 UTC m=+1.279927383" watchObservedRunningTime="2025-09-12 19:45:36.540792098 +0000 UTC m=+1.293004107"
Sep 12 19:45:36.559723 kubelet[2856]: I0912 19:45:36.558842 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-l18mb.gb1.brightbox.com" podStartSLOduration=1.558824553 podStartE2EDuration="1.558824553s" podCreationTimestamp="2025-09-12 19:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:45:36.557089632 +0000 UTC m=+1.309301656" watchObservedRunningTime="2025-09-12 19:45:36.558824553 +0000 UTC m=+1.311036556"
Sep 12 19:45:36.559723 kubelet[2856]: I0912 19:45:36.559021 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-l18mb.gb1.brightbox.com" podStartSLOduration=1.55901192 podStartE2EDuration="1.55901192s" podCreationTimestamp="2025-09-12 19:45:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:45:36.543523702 +0000 UTC m=+1.295735730" watchObservedRunningTime="2025-09-12 19:45:36.55901192 +0000 UTC m=+1.311223929"
Sep 12 19:45:40.769580 kubelet[2856]: I0912 19:45:40.769527 2856 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 12 19:45:40.782995 kubelet[2856]: I0912 19:45:40.773500 2856 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 12 19:45:40.783074 containerd[1629]: time="2025-09-12T19:45:40.770821885Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 12 19:45:40.954050 kubelet[2856]: I0912 19:45:40.953704 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d2cacb7c-f569-408f-905f-7f199b178498-kube-proxy\") pod \"kube-proxy-64vpr\" (UID: \"d2cacb7c-f569-408f-905f-7f199b178498\") " pod="kube-system/kube-proxy-64vpr"
Sep 12 19:45:40.954050 kubelet[2856]: I0912 19:45:40.953788 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2cacb7c-f569-408f-905f-7f199b178498-xtables-lock\") pod \"kube-proxy-64vpr\" (UID: \"d2cacb7c-f569-408f-905f-7f199b178498\") " pod="kube-system/kube-proxy-64vpr"
Sep 12 19:45:40.954050 kubelet[2856]: I0912 19:45:40.953821 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2cacb7c-f569-408f-905f-7f199b178498-lib-modules\") pod \"kube-proxy-64vpr\" (UID: \"d2cacb7c-f569-408f-905f-7f199b178498\") " pod="kube-system/kube-proxy-64vpr"
Sep 12 19:45:40.954050 kubelet[2856]: I0912 19:45:40.953898 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4wm\" (UniqueName: \"kubernetes.io/projected/d2cacb7c-f569-408f-905f-7f199b178498-kube-api-access-6x4wm\") pod \"kube-proxy-64vpr\" (UID: \"d2cacb7c-f569-408f-905f-7f199b178498\") " pod="kube-system/kube-proxy-64vpr"
Sep 12 19:45:41.068241 kubelet[2856]: E0912 19:45:41.067671 2856 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 12 19:45:41.068241 kubelet[2856]: E0912 19:45:41.067790 2856 projected.go:194] Error preparing data for projected volume kube-api-access-6x4wm for pod kube-system/kube-proxy-64vpr: configmap "kube-root-ca.crt" not found
Sep 12 19:45:41.068241 kubelet[2856]: E0912 19:45:41.067945 2856 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d2cacb7c-f569-408f-905f-7f199b178498-kube-api-access-6x4wm podName:d2cacb7c-f569-408f-905f-7f199b178498 nodeName:}" failed. No retries permitted until 2025-09-12 19:45:41.56789996 +0000 UTC m=+6.320111973 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-6x4wm" (UniqueName: "kubernetes.io/projected/d2cacb7c-f569-408f-905f-7f199b178498-kube-api-access-6x4wm") pod "kube-proxy-64vpr" (UID: "d2cacb7c-f569-408f-905f-7f199b178498") : configmap "kube-root-ca.crt" not found
Sep 12 19:45:41.700574 containerd[1629]: time="2025-09-12T19:45:41.700401471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64vpr,Uid:d2cacb7c-f569-408f-905f-7f199b178498,Namespace:kube-system,Attempt:0,}"
Sep 12 19:45:41.760900 containerd[1629]: time="2025-09-12T19:45:41.760586509Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:41.760900 containerd[1629]: time="2025-09-12T19:45:41.760730280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:41.760900 containerd[1629]: time="2025-09-12T19:45:41.760757821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:41.762690 containerd[1629]: time="2025-09-12T19:45:41.761475931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:41.810735 systemd[1]: run-containerd-runc-k8s.io-bb33dc922694d48b7ed4204acbcf6ca8b60b2caa3bee07482099323f051bb0c3-runc.mS3FkB.mount: Deactivated successfully.
Sep 12 19:45:41.936069 containerd[1629]: time="2025-09-12T19:45:41.935990147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-64vpr,Uid:d2cacb7c-f569-408f-905f-7f199b178498,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb33dc922694d48b7ed4204acbcf6ca8b60b2caa3bee07482099323f051bb0c3\""
Sep 12 19:45:41.942906 containerd[1629]: time="2025-09-12T19:45:41.942133376Z" level=info msg="CreateContainer within sandbox \"bb33dc922694d48b7ed4204acbcf6ca8b60b2caa3bee07482099323f051bb0c3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 19:45:41.978559 kubelet[2856]: I0912 19:45:41.970816 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c45wm\" (UniqueName: \"kubernetes.io/projected/40e242e0-42ef-4706-8f21-1b06d188f4d0-kube-api-access-c45wm\") pod \"tigera-operator-58fc44c59b-ppphl\" (UID: \"40e242e0-42ef-4706-8f21-1b06d188f4d0\") " pod="tigera-operator/tigera-operator-58fc44c59b-ppphl"
Sep 12 19:45:41.978559 kubelet[2856]: I0912 19:45:41.971063 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/40e242e0-42ef-4706-8f21-1b06d188f4d0-var-lib-calico\") pod \"tigera-operator-58fc44c59b-ppphl\" (UID: \"40e242e0-42ef-4706-8f21-1b06d188f4d0\") " pod="tigera-operator/tigera-operator-58fc44c59b-ppphl"
Sep 12 19:45:41.980266 containerd[1629]: time="2025-09-12T19:45:41.980200735Z" level=info msg="CreateContainer within sandbox \"bb33dc922694d48b7ed4204acbcf6ca8b60b2caa3bee07482099323f051bb0c3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"736579d7211c1c5a6e768d86142c4302ffe89c9757a1cab071b34d6e27e6640f\""
Sep 12 19:45:41.982531 containerd[1629]: time="2025-09-12T19:45:41.981268903Z" level=info msg="StartContainer for \"736579d7211c1c5a6e768d86142c4302ffe89c9757a1cab071b34d6e27e6640f\""
Sep 12 19:45:42.078544 containerd[1629]: time="2025-09-12T19:45:42.078475521Z" level=info msg="StartContainer for \"736579d7211c1c5a6e768d86142c4302ffe89c9757a1cab071b34d6e27e6640f\" returns successfully"
Sep 12 19:45:42.209538 containerd[1629]: time="2025-09-12T19:45:42.209485272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ppphl,Uid:40e242e0-42ef-4706-8f21-1b06d188f4d0,Namespace:tigera-operator,Attempt:0,}"
Sep 12 19:45:42.260478 containerd[1629]: time="2025-09-12T19:45:42.259606254Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:42.262760 containerd[1629]: time="2025-09-12T19:45:42.262656570Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:42.263877 containerd[1629]: time="2025-09-12T19:45:42.263219020Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:42.267038 containerd[1629]: time="2025-09-12T19:45:42.266534976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:42.386813 containerd[1629]: time="2025-09-12T19:45:42.386498885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-58fc44c59b-ppphl,Uid:40e242e0-42ef-4706-8f21-1b06d188f4d0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e5f18b41fa15903cfdd739d9977abe7303c1e3b1d7d865401e0ece14f7072e1\""
Sep 12 19:45:42.390245 containerd[1629]: time="2025-09-12T19:45:42.390208886Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\""
Sep 12 19:45:42.526513 kubelet[2856]: I0912 19:45:42.524793 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-64vpr" podStartSLOduration=2.524735588 podStartE2EDuration="2.524735588s" podCreationTimestamp="2025-09-12 19:45:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:45:42.524562828 +0000 UTC m=+7.276774852" watchObservedRunningTime="2025-09-12 19:45:42.524735588 +0000 UTC m=+7.276947603"
Sep 12 19:45:44.027234 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1934986963.mount: Deactivated successfully.
Sep 12 19:45:45.541718 containerd[1629]: time="2025-09-12T19:45:45.541606637Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:45.543238 containerd[1629]: time="2025-09-12T19:45:45.543164415Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609"
Sep 12 19:45:45.544517 containerd[1629]: time="2025-09-12T19:45:45.544036569Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:45.547267 containerd[1629]: time="2025-09-12T19:45:45.547228936Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:45:45.549171 containerd[1629]: time="2025-09-12T19:45:45.549114140Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.158845958s"
Sep 12 19:45:45.549264 containerd[1629]: time="2025-09-12T19:45:45.549181316Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\""
Sep 12 19:45:45.554762 containerd[1629]: time="2025-09-12T19:45:45.554469416Z" level=info msg="CreateContainer within sandbox \"6e5f18b41fa15903cfdd739d9977abe7303c1e3b1d7d865401e0ece14f7072e1\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Sep 12 19:45:45.571995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569913837.mount: Deactivated successfully.
Sep 12 19:45:45.574263 containerd[1629]: time="2025-09-12T19:45:45.574129665Z" level=info msg="CreateContainer within sandbox \"6e5f18b41fa15903cfdd739d9977abe7303c1e3b1d7d865401e0ece14f7072e1\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"261038f432c79f7a6b981bfc7bab3ea800bfda218cfbdb80c6570d59d6831628\""
Sep 12 19:45:45.576036 containerd[1629]: time="2025-09-12T19:45:45.574704289Z" level=info msg="StartContainer for \"261038f432c79f7a6b981bfc7bab3ea800bfda218cfbdb80c6570d59d6831628\""
Sep 12 19:45:45.690291 containerd[1629]: time="2025-09-12T19:45:45.690144839Z" level=info msg="StartContainer for \"261038f432c79f7a6b981bfc7bab3ea800bfda218cfbdb80c6570d59d6831628\" returns successfully"
Sep 12 19:45:46.537340 kubelet[2856]: I0912 19:45:46.537232 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-58fc44c59b-ppphl" podStartSLOduration=2.3744759650000002 podStartE2EDuration="5.537183495s" podCreationTimestamp="2025-09-12 19:45:41 +0000 UTC" firstStartedPulling="2025-09-12 19:45:42.388941519 +0000 UTC m=+7.141153523" lastFinishedPulling="2025-09-12 19:45:45.551649043 +0000 UTC m=+10.303861053" observedRunningTime="2025-09-12 19:45:46.535256742 +0000 UTC m=+11.287468761" watchObservedRunningTime="2025-09-12 19:45:46.537183495 +0000 UTC m=+11.289395507"
Sep 12 19:45:53.466039 sudo[1927]: pam_unix(sudo:session): session closed for user root
Sep 12 19:45:53.614740 sshd[1923]: pam_unix(sshd:session): session closed for user core
Sep 12 19:45:53.631804 systemd[1]: sshd@8-10.230.9.238:22-139.178.68.195:42640.service: Deactivated successfully.
Sep 12 19:45:53.661068 systemd[1]: session-11.scope: Deactivated successfully.
Sep 12 19:45:53.662201 systemd-logind[1607]: Session 11 logged out. Waiting for processes to exit.
Sep 12 19:45:53.670129 systemd-logind[1607]: Removed session 11.
Sep 12 19:45:57.889575 kubelet[2856]: I0912 19:45:57.889223 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b39d8fd5-0abd-4956-97e9-587eb3c4bf28-typha-certs\") pod \"calico-typha-67fb6fb7cd-5cg24\" (UID: \"b39d8fd5-0abd-4956-97e9-587eb3c4bf28\") " pod="calico-system/calico-typha-67fb6fb7cd-5cg24"
Sep 12 19:45:57.889575 kubelet[2856]: I0912 19:45:57.889399 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7lpn\" (UniqueName: \"kubernetes.io/projected/b39d8fd5-0abd-4956-97e9-587eb3c4bf28-kube-api-access-v7lpn\") pod \"calico-typha-67fb6fb7cd-5cg24\" (UID: \"b39d8fd5-0abd-4956-97e9-587eb3c4bf28\") " pod="calico-system/calico-typha-67fb6fb7cd-5cg24"
Sep 12 19:45:57.889575 kubelet[2856]: I0912 19:45:57.889486 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b39d8fd5-0abd-4956-97e9-587eb3c4bf28-tigera-ca-bundle\") pod \"calico-typha-67fb6fb7cd-5cg24\" (UID: \"b39d8fd5-0abd-4956-97e9-587eb3c4bf28\") " pod="calico-system/calico-typha-67fb6fb7cd-5cg24"
Sep 12 19:45:58.099974 containerd[1629]: time="2025-09-12T19:45:58.099526931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67fb6fb7cd-5cg24,Uid:b39d8fd5-0abd-4956-97e9-587eb3c4bf28,Namespace:calico-system,Attempt:0,}"
Sep 12 19:45:58.172483 containerd[1629]: time="2025-09-12T19:45:58.167443306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:45:58.172483 containerd[1629]: time="2025-09-12T19:45:58.167574974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:45:58.172483 containerd[1629]: time="2025-09-12T19:45:58.167604121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:58.172483 containerd[1629]: time="2025-09-12T19:45:58.169631820Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292601 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-cni-net-dir\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292659 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-flexvol-driver-host\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292691 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/acfd9cda-7ccf-4583-b214-75939b6777c4-node-certs\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292718 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-xtables-lock\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292744 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-policysync\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292771 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-var-run-calico\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292887 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-lib-modules\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292921 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-cni-bin-dir\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292953 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-cni-log-dir\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.292981 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/acfd9cda-7ccf-4583-b214-75939b6777c4-var-lib-calico\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.293012 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/acfd9cda-7ccf-4583-b214-75939b6777c4-tigera-ca-bundle\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.293197 kubelet[2856]: I0912 19:45:58.293038 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9jkw\" (UniqueName: \"kubernetes.io/projected/acfd9cda-7ccf-4583-b214-75939b6777c4-kube-api-access-k9jkw\") pod \"calico-node-xz6fz\" (UID: \"acfd9cda-7ccf-4583-b214-75939b6777c4\") " pod="calico-system/calico-node-xz6fz"
Sep 12 19:45:58.334106 containerd[1629]: time="2025-09-12T19:45:58.334020455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67fb6fb7cd-5cg24,Uid:b39d8fd5-0abd-4956-97e9-587eb3c4bf28,Namespace:calico-system,Attempt:0,} returns sandbox id \"e11453a0d9278bd1176aae214d571ae9065d6819d064b0b1127581b4595d3ed1\""
Sep 12 19:45:58.338060 containerd[1629]: time="2025-09-12T19:45:58.337812162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 19:45:58.386992 kubelet[2856]: E0912 19:45:58.386760 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163"
Sep 12 19:45:58.413504 kubelet[2856]: E0912 19:45:58.411968 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.413504 kubelet[2856]: W0912 19:45:58.412022 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.413504 kubelet[2856]: E0912 19:45:58.412679 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.413504 kubelet[2856]: W0912 19:45:58.412698 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.413504 kubelet[2856]: E0912 19:45:58.413144 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.413504 kubelet[2856]: E0912 19:45:58.413276 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.413504 kubelet[2856]: E0912 19:45:58.413476 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.413504 kubelet[2856]: W0912 19:45:58.413489 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.413592 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.413809 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.416383 kubelet[2856]: W0912 19:45:58.413823 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.413881 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.414214 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.416383 kubelet[2856]: W0912 19:45:58.414229 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.414246 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.416041 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.416383 kubelet[2856]: W0912 19:45:58.416057 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.416383 kubelet[2856]: E0912 19:45:58.416074 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.419068 kubelet[2856]: E0912 19:45:58.418217 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.419068 kubelet[2856]: W0912 19:45:58.418246 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.419068 kubelet[2856]: E0912 19:45:58.418342 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.419068 kubelet[2856]: E0912 19:45:58.418531 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.419068 kubelet[2856]: W0912 19:45:58.418545 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.419068 kubelet[2856]: E0912 19:45:58.418573 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.419068 kubelet[2856]: E0912 19:45:58.419066 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.420276 kubelet[2856]: W0912 19:45:58.419081 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.420276 kubelet[2856]: E0912 19:45:58.419104 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.422957 kubelet[2856]: E0912 19:45:58.422121 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.422957 kubelet[2856]: W0912 19:45:58.422150 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.422957 kubelet[2856]: E0912 19:45:58.422176 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.424593 kubelet[2856]: E0912 19:45:58.424399 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.424593 kubelet[2856]: W0912 19:45:58.424420 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.424593 kubelet[2856]: E0912 19:45:58.424457 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.426200 kubelet[2856]: E0912 19:45:58.425950 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.426200 kubelet[2856]: W0912 19:45:58.425965 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.426200 kubelet[2856]: E0912 19:45:58.426016 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.426625 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.426643 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.426666 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.426979 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.426994 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427008 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427245 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.427269 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427284 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427555 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.427568 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427596 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427864 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.427898 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.427913 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428140 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.428153 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428169 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428424 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.428437 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428450 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428682 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.428695 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428708 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.428976 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.429430 kubelet[2856]: W0912 19:45:58.428990 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.429004 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.429430 kubelet[2856]: E0912 19:45:58.429231 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.430775 kubelet[2856]: W0912 19:45:58.429245 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.429259 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.429531 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.430775 kubelet[2856]: W0912 19:45:58.429544 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.429557 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.429792 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.430775 kubelet[2856]: W0912 19:45:58.429805 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.429827 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.430097 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.430775 kubelet[2856]: W0912 19:45:58.430110 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.430125 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.430365 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.430775 kubelet[2856]: W0912 19:45:58.430378 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.430775 kubelet[2856]: E0912 19:45:58.430391 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.443809 kubelet[2856]: E0912 19:45:58.443764 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.445946 kubelet[2856]: W0912 19:45:58.443794 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.446045 kubelet[2856]: E0912 19:45:58.445951 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.463132 containerd[1629]: time="2025-09-12T19:45:58.463089702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xz6fz,Uid:acfd9cda-7ccf-4583-b214-75939b6777c4,Namespace:calico-system,Attempt:0,}"
Sep 12 19:45:58.495291 kubelet[2856]: E0912 19:45:58.495249 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.495453 kubelet[2856]: W0912 19:45:58.495298 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.495453 kubelet[2856]: E0912 19:45:58.495355 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.495453 kubelet[2856]: I0912 19:45:58.495407 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c267a380-4b7e-4a52-b71c-69a8c98b3163-registration-dir\") pod \"csi-node-driver-k2xgk\" (UID: \"c267a380-4b7e-4a52-b71c-69a8c98b3163\") " pod="calico-system/csi-node-driver-k2xgk"
Sep 12 19:45:58.496020 kubelet[2856]: E0912 19:45:58.495994 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.496020 kubelet[2856]: W0912 19:45:58.496017 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.496134 kubelet[2856]: E0912 19:45:58.496036 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.496134 kubelet[2856]: I0912 19:45:58.496072 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c267a380-4b7e-4a52-b71c-69a8c98b3163-varrun\") pod \"csi-node-driver-k2xgk\" (UID: \"c267a380-4b7e-4a52-b71c-69a8c98b3163\") " pod="calico-system/csi-node-driver-k2xgk"
Sep 12 19:45:58.498258 kubelet[2856]: E0912 19:45:58.498210 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.498338 kubelet[2856]: W0912 19:45:58.498301 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.498408 kubelet[2856]: E0912 19:45:58.498351 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.498408 kubelet[2856]: I0912 19:45:58.498378 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c267a380-4b7e-4a52-b71c-69a8c98b3163-kubelet-dir\") pod \"csi-node-driver-k2xgk\" (UID: \"c267a380-4b7e-4a52-b71c-69a8c98b3163\") " pod="calico-system/csi-node-driver-k2xgk"
Sep 12 19:45:58.498895 kubelet[2856]: E0912 19:45:58.498846 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.499303 kubelet[2856]: W0912 19:45:58.498895 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.499303 kubelet[2856]: E0912 19:45:58.498927 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.500025 kubelet[2856]: E0912 19:45:58.499438 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.500025 kubelet[2856]: W0912 19:45:58.499507 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.500025 kubelet[2856]: E0912 19:45:58.499530 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.500025 kubelet[2856]: E0912 19:45:58.500022 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.500962 kubelet[2856]: W0912 19:45:58.500039 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.500962 kubelet[2856]: E0912 19:45:58.500099 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.500962 kubelet[2856]: I0912 19:45:58.500128 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c267a380-4b7e-4a52-b71c-69a8c98b3163-socket-dir\") pod \"csi-node-driver-k2xgk\" (UID: \"c267a380-4b7e-4a52-b71c-69a8c98b3163\") " pod="calico-system/csi-node-driver-k2xgk"
Sep 12 19:45:58.501333 kubelet[2856]: E0912 19:45:58.501298 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.501383 kubelet[2856]: W0912 19:45:58.501356 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.502121 kubelet[2856]: E0912 19:45:58.502028 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.502452 kubelet[2856]: E0912 19:45:58.502397 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.502452 kubelet[2856]: W0912 19:45:58.502449 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.502580 kubelet[2856]: E0912 19:45:58.502467 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.503351 kubelet[2856]: E0912 19:45:58.503330 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.503351 kubelet[2856]: W0912 19:45:58.503349 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.503497 kubelet[2856]: E0912 19:45:58.503371 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.504072 kubelet[2856]: E0912 19:45:58.504051 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.504126 kubelet[2856]: W0912 19:45:58.504072 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.504230 kubelet[2856]: E0912 19:45:58.504131 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.504301 kubelet[2856]: I0912 19:45:58.504159 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn4w9\" (UniqueName: \"kubernetes.io/projected/c267a380-4b7e-4a52-b71c-69a8c98b3163-kube-api-access-mn4w9\") pod \"csi-node-driver-k2xgk\" (UID: \"c267a380-4b7e-4a52-b71c-69a8c98b3163\") " pod="calico-system/csi-node-driver-k2xgk"
Sep 12 19:45:58.504799 kubelet[2856]: E0912 19:45:58.504761 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.504928 kubelet[2856]: W0912 19:45:58.504822 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.505057 kubelet[2856]: E0912 19:45:58.504970 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.505818 kubelet[2856]: E0912 19:45:58.505631 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.505818 kubelet[2856]: W0912 19:45:58.505645 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.505818 kubelet[2856]: E0912 19:45:58.505672 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:45:58.507070 kubelet[2856]: E0912 19:45:58.507020 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:45:58.507070 kubelet[2856]: W0912 19:45:58.507063 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:45:58.507186 kubelet[2856]: E0912 19:45:58.507082 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.510909 kubelet[2856]: E0912 19:45:58.510455 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.510909 kubelet[2856]: W0912 19:45:58.510507 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.510909 kubelet[2856]: E0912 19:45:58.510534 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.514305 kubelet[2856]: E0912 19:45:58.514280 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.515216 kubelet[2856]: W0912 19:45:58.514424 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.515216 kubelet[2856]: E0912 19:45:58.514509 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.548887 containerd[1629]: time="2025-09-12T19:45:58.544366919Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:45:58.548887 containerd[1629]: time="2025-09-12T19:45:58.544461866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:45:58.548887 containerd[1629]: time="2025-09-12T19:45:58.544479016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:45:58.548887 containerd[1629]: time="2025-09-12T19:45:58.544631436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:45:58.609218 kubelet[2856]: E0912 19:45:58.609178 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.609380 kubelet[2856]: W0912 19:45:58.609316 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.609380 kubelet[2856]: E0912 19:45:58.609347 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.610236 kubelet[2856]: E0912 19:45:58.610214 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.610236 kubelet[2856]: W0912 19:45:58.610235 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.610638 kubelet[2856]: E0912 19:45:58.610261 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.612386 kubelet[2856]: E0912 19:45:58.611363 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.612386 kubelet[2856]: W0912 19:45:58.611664 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.612386 kubelet[2856]: E0912 19:45:58.611702 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.613531 kubelet[2856]: E0912 19:45:58.613253 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.613531 kubelet[2856]: W0912 19:45:58.613418 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.613531 kubelet[2856]: E0912 19:45:58.613485 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.617108 kubelet[2856]: E0912 19:45:58.616504 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.617108 kubelet[2856]: W0912 19:45:58.616525 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.617108 kubelet[2856]: E0912 19:45:58.616564 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.618153 kubelet[2856]: E0912 19:45:58.617921 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.618153 kubelet[2856]: W0912 19:45:58.617996 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.618153 kubelet[2856]: E0912 19:45:58.618048 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.621126 kubelet[2856]: E0912 19:45:58.620382 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.621126 kubelet[2856]: W0912 19:45:58.620403 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.621126 kubelet[2856]: E0912 19:45:58.620441 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.621765 kubelet[2856]: E0912 19:45:58.621565 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.621765 kubelet[2856]: W0912 19:45:58.621587 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.622556 kubelet[2856]: E0912 19:45:58.621990 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.622556 kubelet[2856]: E0912 19:45:58.622288 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.622556 kubelet[2856]: W0912 19:45:58.622304 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.622556 kubelet[2856]: E0912 19:45:58.622417 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.622758 kubelet[2856]: E0912 19:45:58.622662 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.622758 kubelet[2856]: W0912 19:45:58.622676 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.623476 kubelet[2856]: E0912 19:45:58.622885 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.623476 kubelet[2856]: E0912 19:45:58.623060 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.623476 kubelet[2856]: W0912 19:45:58.623074 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.623476 kubelet[2856]: E0912 19:45:58.623167 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.623476 kubelet[2856]: E0912 19:45:58.623408 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.623476 kubelet[2856]: W0912 19:45:58.623422 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.625173 kubelet[2856]: E0912 19:45:58.623765 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.626004 kubelet[2856]: E0912 19:45:58.625963 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.626004 kubelet[2856]: W0912 19:45:58.625995 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.626953 kubelet[2856]: E0912 19:45:58.626038 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.627708 kubelet[2856]: E0912 19:45:58.627655 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.627708 kubelet[2856]: W0912 19:45:58.627680 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.627708 kubelet[2856]: E0912 19:45:58.627847 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.629106 kubelet[2856]: E0912 19:45:58.629001 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.629106 kubelet[2856]: W0912 19:45:58.629016 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.631150 kubelet[2856]: E0912 19:45:58.631088 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.633795 kubelet[2856]: E0912 19:45:58.633764 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.635066 kubelet[2856]: W0912 19:45:58.634992 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.635703 kubelet[2856]: E0912 19:45:58.635271 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.638137 kubelet[2856]: E0912 19:45:58.638074 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.638137 kubelet[2856]: W0912 19:45:58.638098 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.640574 kubelet[2856]: E0912 19:45:58.639914 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.640574 kubelet[2856]: E0912 19:45:58.640399 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.640574 kubelet[2856]: W0912 19:45:58.640415 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.644108 kubelet[2856]: E0912 19:45:58.642310 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.644108 kubelet[2856]: E0912 19:45:58.643977 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.644108 kubelet[2856]: W0912 19:45:58.643994 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.645231 kubelet[2856]: E0912 19:45:58.644551 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.646717 kubelet[2856]: E0912 19:45:58.645920 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.646717 kubelet[2856]: W0912 19:45:58.645940 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.647918 kubelet[2856]: E0912 19:45:58.647319 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.648831 kubelet[2856]: E0912 19:45:58.648444 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.649934 kubelet[2856]: W0912 19:45:58.649366 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.650620 kubelet[2856]: E0912 19:45:58.650297 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.652576 kubelet[2856]: E0912 19:45:58.652361 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.652576 kubelet[2856]: W0912 19:45:58.652381 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.654240 kubelet[2856]: E0912 19:45:58.653386 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.656340 kubelet[2856]: E0912 19:45:58.655584 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.656340 kubelet[2856]: W0912 19:45:58.655612 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.659218 kubelet[2856]: E0912 19:45:58.658898 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.660973 kubelet[2856]: E0912 19:45:58.660043 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.660973 kubelet[2856]: W0912 19:45:58.660063 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.662228 kubelet[2856]: E0912 19:45:58.661920 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:45:58.663907 kubelet[2856]: E0912 19:45:58.663432 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.663907 kubelet[2856]: W0912 19:45:58.663453 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.663907 kubelet[2856]: E0912 19:45:58.663481 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.682436 kubelet[2856]: E0912 19:45:58.681523 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:45:58.682436 kubelet[2856]: W0912 19:45:58.681550 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:45:58.682436 kubelet[2856]: E0912 19:45:58.681573 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:45:58.722299 containerd[1629]: time="2025-09-12T19:45:58.721725551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xz6fz,Uid:acfd9cda-7ccf-4583-b214-75939b6777c4,Namespace:calico-system,Attempt:0,} returns sandbox id \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\"" Sep 12 19:46:00.253310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999276711.mount: Deactivated successfully. 
Sep 12 19:46:00.435712 kubelet[2856]: E0912 19:46:00.435601 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:02.188637 containerd[1629]: time="2025-09-12T19:46:02.188482141Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:02.190760 containerd[1629]: time="2025-09-12T19:46:02.190649287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=35237389" Sep 12 19:46:02.194405 containerd[1629]: time="2025-09-12T19:46:02.194342356Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:02.197125 containerd[1629]: time="2025-09-12T19:46:02.197057926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:02.199035 containerd[1629]: time="2025-09-12T19:46:02.198762880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.860806753s" Sep 12 19:46:02.199035 containerd[1629]: time="2025-09-12T19:46:02.198819898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference 
\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\"" Sep 12 19:46:02.210976 containerd[1629]: time="2025-09-12T19:46:02.209838569Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\"" Sep 12 19:46:02.255441 containerd[1629]: time="2025-09-12T19:46:02.255361794Z" level=info msg="CreateContainer within sandbox \"e11453a0d9278bd1176aae214d571ae9065d6819d064b0b1127581b4595d3ed1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Sep 12 19:46:02.311269 containerd[1629]: time="2025-09-12T19:46:02.311216448Z" level=info msg="CreateContainer within sandbox \"e11453a0d9278bd1176aae214d571ae9065d6819d064b0b1127581b4595d3ed1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"02ffc1f9191cfca6d3dd72abc3be75c3b404ee02054c5080988c580b0766504c\"" Sep 12 19:46:02.312890 containerd[1629]: time="2025-09-12T19:46:02.312375167Z" level=info msg="StartContainer for \"02ffc1f9191cfca6d3dd72abc3be75c3b404ee02054c5080988c580b0766504c\"" Sep 12 19:46:02.435401 kubelet[2856]: E0912 19:46:02.435187 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:02.512603 containerd[1629]: time="2025-09-12T19:46:02.512454010Z" level=info msg="StartContainer for \"02ffc1f9191cfca6d3dd72abc3be75c3b404ee02054c5080988c580b0766504c\" returns successfully" Sep 12 19:46:02.668341 kubelet[2856]: E0912 19:46:02.668255 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.668341 kubelet[2856]: W0912 19:46:02.668327 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], 
error: executable file not found in $PATH, output: "" Sep 12 19:46:02.669613 kubelet[2856]: E0912 19:46:02.669171 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:02.669727 kubelet[2856]: E0912 19:46:02.669667 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.669727 kubelet[2856]: W0912 19:46:02.669689 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.669837 kubelet[2856]: E0912 19:46:02.669727 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:02.671109 kubelet[2856]: E0912 19:46:02.670407 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.671109 kubelet[2856]: W0912 19:46:02.670428 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.671109 kubelet[2856]: E0912 19:46:02.670446 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:02.671109 kubelet[2856]: E0912 19:46:02.670839 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.671109 kubelet[2856]: W0912 19:46:02.670890 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.671109 kubelet[2856]: E0912 19:46:02.670909 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:02.671450 kubelet[2856]: E0912 19:46:02.671232 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.671450 kubelet[2856]: W0912 19:46:02.671264 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.671450 kubelet[2856]: E0912 19:46:02.671282 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:02.671605 kubelet[2856]: E0912 19:46:02.671575 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.671605 kubelet[2856]: W0912 19:46:02.671589 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.674369 kubelet[2856]: E0912 19:46:02.671617 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:02.674369 kubelet[2856]: E0912 19:46:02.671948 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:02.674369 kubelet[2856]: W0912 19:46:02.671962 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:02.674369 kubelet[2856]: E0912 19:46:02.671977 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 19:46:02.801614 kubelet[2856]: E0912 19:46:02.798895 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:46:02.811039 kubelet[2856]: E0912 19:46:02.810990 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:46:02.811202 kubelet[2856]: W0912 19:46:02.811177 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:46:02.811740 kubelet[2856]: E0912 19:46:02.811355 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:46:03.626736 kubelet[2856]: I0912 19:46:03.626238 2856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 19:46:03.681597 kubelet[2856]: E0912 19:46:03.681567 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:46:03.682047 kubelet[2856]: W0912 19:46:03.681774 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:46:03.682047 kubelet[2856]: E0912 19:46:03.681806 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:03.776785 kubelet[2856]: E0912 19:46:03.776666 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.776785 kubelet[2856]: W0912 19:46:03.776681 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.776785 kubelet[2856]: E0912 19:46:03.776773 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:03.777098 kubelet[2856]: E0912 19:46:03.777029 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.777098 kubelet[2856]: W0912 19:46:03.777043 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.777098 kubelet[2856]: E0912 19:46:03.777066 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:03.778380 kubelet[2856]: E0912 19:46:03.777340 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.778380 kubelet[2856]: W0912 19:46:03.777362 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.778380 kubelet[2856]: E0912 19:46:03.777463 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:03.778380 kubelet[2856]: E0912 19:46:03.777768 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.778380 kubelet[2856]: W0912 19:46:03.777782 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.778380 kubelet[2856]: E0912 19:46:03.777811 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:03.778380 kubelet[2856]: E0912 19:46:03.778364 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.778380 kubelet[2856]: W0912 19:46:03.778379 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.778766 kubelet[2856]: E0912 19:46:03.778401 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:03.778766 kubelet[2856]: E0912 19:46:03.778683 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.778766 kubelet[2856]: W0912 19:46:03.778697 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.778766 kubelet[2856]: E0912 19:46:03.778713 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:03.779929 kubelet[2856]: E0912 19:46:03.779900 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.779929 kubelet[2856]: W0912 19:46:03.779922 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.780152 kubelet[2856]: E0912 19:46:03.779939 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:46:03.780214 kubelet[2856]: E0912 19:46:03.780181 2856 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:46:03.780214 kubelet[2856]: W0912 19:46:03.780195 2856 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:46:03.780214 kubelet[2856]: E0912 19:46:03.780210 2856 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:46:03.925270 containerd[1629]: time="2025-09-12T19:46:03.925137753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:03.928367 containerd[1629]: time="2025-09-12T19:46:03.927884592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=4446660" Sep 12 19:46:03.928367 containerd[1629]: time="2025-09-12T19:46:03.928048542Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:03.931579 containerd[1629]: time="2025-09-12T19:46:03.931542251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:03.932824 containerd[1629]: time="2025-09-12T19:46:03.932783754Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 1.722851367s" Sep 12 19:46:03.932978 containerd[1629]: time="2025-09-12T19:46:03.932948215Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\"" Sep 12 19:46:03.937752 containerd[1629]: time="2025-09-12T19:46:03.937719333Z" level=info msg="CreateContainer within sandbox \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Sep 12 19:46:03.955037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671779394.mount: Deactivated successfully. Sep 12 19:46:03.962302 containerd[1629]: time="2025-09-12T19:46:03.962167979Z" level=info msg="CreateContainer within sandbox \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6\"" Sep 12 19:46:03.965698 containerd[1629]: time="2025-09-12T19:46:03.963260854Z" level=info msg="StartContainer for \"bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6\"" Sep 12 19:46:04.083194 containerd[1629]: time="2025-09-12T19:46:04.083051985Z" level=info msg="StartContainer for \"bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6\" returns successfully" Sep 12 19:46:04.165018 containerd[1629]: time="2025-09-12T19:46:04.150691935Z" level=info msg="shim disconnected" id=bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6 namespace=k8s.io Sep 12 19:46:04.165018 containerd[1629]: time="2025-09-12T19:46:04.165007845Z" level=warning msg="cleaning up after shim disconnected" id=bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6 namespace=k8s.io Sep 12 19:46:04.165290 containerd[1629]: time="2025-09-12T19:46:04.165034194Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 19:46:04.229578 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf6ae9a24883f11c97196684c62431b974ca96870d159f8c13091d7da1be1ce6-rootfs.mount: Deactivated successfully. 
Sep 12 19:46:04.435624 kubelet[2856]: E0912 19:46:04.435517 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:04.634240 containerd[1629]: time="2025-09-12T19:46:04.634180030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\"" Sep 12 19:46:04.657287 kubelet[2856]: I0912 19:46:04.655474 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67fb6fb7cd-5cg24" podStartSLOduration=3.783964444 podStartE2EDuration="7.655437496s" podCreationTimestamp="2025-09-12 19:45:57 +0000 UTC" firstStartedPulling="2025-09-12 19:45:58.337049056 +0000 UTC m=+23.089261060" lastFinishedPulling="2025-09-12 19:46:02.2085221 +0000 UTC m=+26.960734112" observedRunningTime="2025-09-12 19:46:02.649487135 +0000 UTC m=+27.401699147" watchObservedRunningTime="2025-09-12 19:46:04.655437496 +0000 UTC m=+29.407649500" Sep 12 19:46:06.435537 kubelet[2856]: E0912 19:46:06.435012 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:08.436127 kubelet[2856]: E0912 19:46:08.435733 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:10.044091 containerd[1629]: time="2025-09-12T19:46:10.043985969Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:10.045774 containerd[1629]: time="2025-09-12T19:46:10.045699004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 19:46:10.047682 containerd[1629]: time="2025-09-12T19:46:10.046609585Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:10.051238 containerd[1629]: time="2025-09-12T19:46:10.050914102Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:10.052085 containerd[1629]: time="2025-09-12T19:46:10.052045888Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.417804802s" Sep 12 19:46:10.052174 containerd[1629]: time="2025-09-12T19:46:10.052099690Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 19:46:10.060800 containerd[1629]: time="2025-09-12T19:46:10.060758960Z" level=info msg="CreateContainer within sandbox \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 19:46:10.080456 containerd[1629]: time="2025-09-12T19:46:10.080413979Z" level=info msg="CreateContainer within sandbox \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856\"" Sep 12 19:46:10.083107 containerd[1629]: time="2025-09-12T19:46:10.083055486Z" level=info msg="StartContainer for \"a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856\"" Sep 12 19:46:10.201427 containerd[1629]: time="2025-09-12T19:46:10.201369132Z" level=info msg="StartContainer for \"a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856\" returns successfully" Sep 12 19:46:10.435769 kubelet[2856]: E0912 19:46:10.435658 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:11.559225 kubelet[2856]: I0912 19:46:11.558405 2856 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Sep 12 19:46:11.594293 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856-rootfs.mount: Deactivated successfully. 
Sep 12 19:46:11.599979 containerd[1629]: time="2025-09-12T19:46:11.593763385Z" level=info msg="shim disconnected" id=a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856 namespace=k8s.io Sep 12 19:46:11.599979 containerd[1629]: time="2025-09-12T19:46:11.596113026Z" level=warning msg="cleaning up after shim disconnected" id=a8dd55424535133eadd032e9b7fb2fbb40ff1228331dfd78047c36f930af8856 namespace=k8s.io Sep 12 19:46:11.599979 containerd[1629]: time="2025-09-12T19:46:11.596143497Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 19:46:11.736321 kubelet[2856]: I0912 19:46:11.734723 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5jrl\" (UniqueName: \"kubernetes.io/projected/d4a4fa81-9bd4-48b5-8758-d5569260af4c-kube-api-access-j5jrl\") pod \"calico-apiserver-758b4974c-55zh4\" (UID: \"d4a4fa81-9bd4-48b5-8758-d5569260af4c\") " pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" Sep 12 19:46:11.736321 kubelet[2856]: I0912 19:46:11.735590 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-kube-api-access-d8ttj\") pod \"whisker-bd8bb6677-w7ftn\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " pod="calico-system/whisker-bd8bb6677-w7ftn" Sep 12 19:46:11.736321 kubelet[2856]: I0912 19:46:11.735751 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b92340aa-ced1-4270-b056-3fe317e4937f-tigera-ca-bundle\") pod \"calico-kube-controllers-59796884b-cxjns\" (UID: \"b92340aa-ced1-4270-b056-3fe317e4937f\") " pod="calico-system/calico-kube-controllers-59796884b-cxjns" Sep 12 19:46:11.736321 kubelet[2856]: I0912 19:46:11.735998 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/6ae76510-d686-4c2f-936d-931b1dc2f310-config-volume\") pod \"coredns-7c65d6cfc9-nj4fp\" (UID: \"6ae76510-d686-4c2f-936d-931b1dc2f310\") " pod="kube-system/coredns-7c65d6cfc9-nj4fp" Sep 12 19:46:11.738320 kubelet[2856]: I0912 19:46:11.736742 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-ca-bundle\") pod \"whisker-bd8bb6677-w7ftn\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " pod="calico-system/whisker-bd8bb6677-w7ftn" Sep 12 19:46:11.738320 kubelet[2856]: I0912 19:46:11.736793 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-backend-key-pair\") pod \"whisker-bd8bb6677-w7ftn\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " pod="calico-system/whisker-bd8bb6677-w7ftn" Sep 12 19:46:11.738320 kubelet[2856]: I0912 19:46:11.736897 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d4a4fa81-9bd4-48b5-8758-d5569260af4c-calico-apiserver-certs\") pod \"calico-apiserver-758b4974c-55zh4\" (UID: \"d4a4fa81-9bd4-48b5-8758-d5569260af4c\") " pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" Sep 12 19:46:11.738320 kubelet[2856]: I0912 19:46:11.736939 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fj42\" (UniqueName: \"kubernetes.io/projected/6ae76510-d686-4c2f-936d-931b1dc2f310-kube-api-access-6fj42\") pod \"coredns-7c65d6cfc9-nj4fp\" (UID: \"6ae76510-d686-4c2f-936d-931b1dc2f310\") " pod="kube-system/coredns-7c65d6cfc9-nj4fp" Sep 12 19:46:11.738320 kubelet[2856]: I0912 19:46:11.736987 2856 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swmzk\" (UniqueName: \"kubernetes.io/projected/b92340aa-ced1-4270-b056-3fe317e4937f-kube-api-access-swmzk\") pod \"calico-kube-controllers-59796884b-cxjns\" (UID: \"b92340aa-ced1-4270-b056-3fe317e4937f\") " pod="calico-system/calico-kube-controllers-59796884b-cxjns" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838281 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24kvx\" (UniqueName: \"kubernetes.io/projected/448b2875-d499-4595-84f9-0b0c9ee28b39-kube-api-access-24kvx\") pod \"coredns-7c65d6cfc9-wltfb\" (UID: \"448b2875-d499-4595-84f9-0b0c9ee28b39\") " pod="kube-system/coredns-7c65d6cfc9-wltfb" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838345 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47b90bcb-34aa-42dd-ada4-6ab99df0bd40-config\") pod \"goldmane-7988f88666-glqx6\" (UID: \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\") " pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838378 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/35e0e2e0-2224-46e0-8d30-2bff8d42b502-calico-apiserver-certs\") pod \"calico-apiserver-758b4974c-zpjm5\" (UID: \"35e0e2e0-2224-46e0-8d30-2bff8d42b502\") " pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838425 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/47b90bcb-34aa-42dd-ada4-6ab99df0bd40-goldmane-key-pair\") pod \"goldmane-7988f88666-glqx6\" (UID: \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\") " pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 
19:46:11.840886 kubelet[2856]: I0912 19:46:11.838482 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xltx7\" (UniqueName: \"kubernetes.io/projected/35e0e2e0-2224-46e0-8d30-2bff8d42b502-kube-api-access-xltx7\") pod \"calico-apiserver-758b4974c-zpjm5\" (UID: \"35e0e2e0-2224-46e0-8d30-2bff8d42b502\") " pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838528 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47b90bcb-34aa-42dd-ada4-6ab99df0bd40-goldmane-ca-bundle\") pod \"goldmane-7988f88666-glqx6\" (UID: \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\") " pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838583 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/448b2875-d499-4595-84f9-0b0c9ee28b39-config-volume\") pod \"coredns-7c65d6cfc9-wltfb\" (UID: \"448b2875-d499-4595-84f9-0b0c9ee28b39\") " pod="kube-system/coredns-7c65d6cfc9-wltfb" Sep 12 19:46:11.840886 kubelet[2856]: I0912 19:46:11.838613 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5ldk\" (UniqueName: \"kubernetes.io/projected/47b90bcb-34aa-42dd-ada4-6ab99df0bd40-kube-api-access-c5ldk\") pod \"goldmane-7988f88666-glqx6\" (UID: \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\") " pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 19:46:11.968816 containerd[1629]: time="2025-09-12T19:46:11.968734812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nj4fp,Uid:6ae76510-d686-4c2f-936d-931b1dc2f310,Namespace:kube-system,Attempt:0,}" Sep 12 19:46:11.972020 containerd[1629]: time="2025-09-12T19:46:11.971981095Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:calico-kube-controllers-59796884b-cxjns,Uid:b92340aa-ced1-4270-b056-3fe317e4937f,Namespace:calico-system,Attempt:0,}" Sep 12 19:46:12.000609 containerd[1629]: time="2025-09-12T19:46:12.000339590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-55zh4,Uid:d4a4fa81-9bd4-48b5-8758-d5569260af4c,Namespace:calico-apiserver,Attempt:0,}" Sep 12 19:46:12.016607 containerd[1629]: time="2025-09-12T19:46:12.016564596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd8bb6677-w7ftn,Uid:78be5b80-9110-4f94-92fe-b7d6cd61fcbc,Namespace:calico-system,Attempt:0,}" Sep 12 19:46:12.026231 containerd[1629]: time="2025-09-12T19:46:12.025610027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wltfb,Uid:448b2875-d499-4595-84f9-0b0c9ee28b39,Namespace:kube-system,Attempt:0,}" Sep 12 19:46:12.036005 containerd[1629]: time="2025-09-12T19:46:12.035955584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-zpjm5,Uid:35e0e2e0-2224-46e0-8d30-2bff8d42b502,Namespace:calico-apiserver,Attempt:0,}" Sep 12 19:46:12.042373 containerd[1629]: time="2025-09-12T19:46:12.042056474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-glqx6,Uid:47b90bcb-34aa-42dd-ada4-6ab99df0bd40,Namespace:calico-system,Attempt:0,}" Sep 12 19:46:12.419785 containerd[1629]: time="2025-09-12T19:46:12.419708589Z" level=error msg="Failed to destroy network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.420431 containerd[1629]: time="2025-09-12T19:46:12.420353935Z" level=error msg="Failed to destroy network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.428888 containerd[1629]: time="2025-09-12T19:46:12.427772948Z" level=error msg="encountered an error cleaning up failed sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.440281 containerd[1629]: time="2025-09-12T19:46:12.439638829Z" level=error msg="encountered an error cleaning up failed sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.440281 containerd[1629]: time="2025-09-12T19:46:12.439727663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd8bb6677-w7ftn,Uid:78be5b80-9110-4f94-92fe-b7d6cd61fcbc,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.453137 containerd[1629]: time="2025-09-12T19:46:12.452293639Z" level=error msg="Failed to destroy network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 
19:46:12.456145 containerd[1629]: time="2025-09-12T19:46:12.456100795Z" level=error msg="encountered an error cleaning up failed sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.456249 containerd[1629]: time="2025-09-12T19:46:12.456169717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-zpjm5,Uid:35e0e2e0-2224-46e0-8d30-2bff8d42b502,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.456502 containerd[1629]: time="2025-09-12T19:46:12.456267508Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-55zh4,Uid:d4a4fa81-9bd4-48b5-8758-d5569260af4c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.456606 containerd[1629]: time="2025-09-12T19:46:12.456528902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k2xgk,Uid:c267a380-4b7e-4a52-b71c-69a8c98b3163,Namespace:calico-system,Attempt:0,}" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457101 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457205 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457222 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457250 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457266 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457101 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457314 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd8bb6677-w7ftn" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457334 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bd8bb6677-w7ftn" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457338 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-758b4974c-55zh4_calico-apiserver(d4a4fa81-9bd4-48b5-8758-d5569260af4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-758b4974c-55zh4_calico-apiserver(d4a4fa81-9bd4-48b5-8758-d5569260af4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" podUID="d4a4fa81-9bd4-48b5-8758-d5569260af4c" Sep 12 19:46:12.457664 kubelet[2856]: E0912 19:46:12.457386 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bd8bb6677-w7ftn_calico-system(78be5b80-9110-4f94-92fe-b7d6cd61fcbc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bd8bb6677-w7ftn_calico-system(78be5b80-9110-4f94-92fe-b7d6cd61fcbc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd8bb6677-w7ftn" podUID="78be5b80-9110-4f94-92fe-b7d6cd61fcbc" Sep 12 19:46:12.458555 kubelet[2856]: E0912 19:46:12.457279 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" Sep 12 19:46:12.458555 kubelet[2856]: E0912 19:46:12.457493 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-758b4974c-zpjm5_calico-apiserver(35e0e2e0-2224-46e0-8d30-2bff8d42b502)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-758b4974c-zpjm5_calico-apiserver(35e0e2e0-2224-46e0-8d30-2bff8d42b502)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" podUID="35e0e2e0-2224-46e0-8d30-2bff8d42b502" Sep 12 19:46:12.458737 containerd[1629]: time="2025-09-12T19:46:12.458212676Z" level=error msg="Failed to destroy network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.459197 containerd[1629]: time="2025-09-12T19:46:12.459161231Z" level=error msg="encountered an error cleaning up failed sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.459956 containerd[1629]: time="2025-09-12T19:46:12.459910437Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59796884b-cxjns,Uid:b92340aa-ced1-4270-b056-3fe317e4937f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.460247 kubelet[2856]: E0912 19:46:12.460211 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.460338 kubelet[2856]: E0912 19:46:12.460259 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59796884b-cxjns" Sep 12 19:46:12.460338 kubelet[2856]: E0912 19:46:12.460285 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-59796884b-cxjns" Sep 12 19:46:12.460447 kubelet[2856]: E0912 19:46:12.460328 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-59796884b-cxjns_calico-system(b92340aa-ced1-4270-b056-3fe317e4937f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-59796884b-cxjns_calico-system(b92340aa-ced1-4270-b056-3fe317e4937f)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59796884b-cxjns" podUID="b92340aa-ced1-4270-b056-3fe317e4937f" Sep 12 19:46:12.470769 containerd[1629]: time="2025-09-12T19:46:12.470625081Z" level=error msg="Failed to destroy network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.471370 containerd[1629]: time="2025-09-12T19:46:12.471246316Z" level=error msg="encountered an error cleaning up failed sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.471370 containerd[1629]: time="2025-09-12T19:46:12.471304780Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nj4fp,Uid:6ae76510-d686-4c2f-936d-931b1dc2f310,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.472673 kubelet[2856]: E0912 19:46:12.472335 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.472673 kubelet[2856]: E0912 19:46:12.472395 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nj4fp" Sep 12 19:46:12.472673 kubelet[2856]: E0912 19:46:12.472423 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-nj4fp" Sep 12 19:46:12.472673 kubelet[2856]: E0912 19:46:12.472480 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nj4fp_kube-system(6ae76510-d686-4c2f-936d-931b1dc2f310)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nj4fp_kube-system(6ae76510-d686-4c2f-936d-931b1dc2f310)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nj4fp" 
podUID="6ae76510-d686-4c2f-936d-931b1dc2f310" Sep 12 19:46:12.490232 containerd[1629]: time="2025-09-12T19:46:12.490020190Z" level=error msg="Failed to destroy network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.491114 containerd[1629]: time="2025-09-12T19:46:12.490898462Z" level=error msg="encountered an error cleaning up failed sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.491114 containerd[1629]: time="2025-09-12T19:46:12.490967724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-glqx6,Uid:47b90bcb-34aa-42dd-ada4-6ab99df0bd40,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.491766 kubelet[2856]: E0912 19:46:12.491392 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.491766 kubelet[2856]: E0912 19:46:12.491460 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 19:46:12.491766 kubelet[2856]: E0912 19:46:12.491488 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7988f88666-glqx6" Sep 12 19:46:12.491766 kubelet[2856]: E0912 19:46:12.491538 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7988f88666-glqx6_calico-system(47b90bcb-34aa-42dd-ada4-6ab99df0bd40)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7988f88666-glqx6_calico-system(47b90bcb-34aa-42dd-ada4-6ab99df0bd40)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-glqx6" podUID="47b90bcb-34aa-42dd-ada4-6ab99df0bd40" Sep 12 19:46:12.492748 containerd[1629]: time="2025-09-12T19:46:12.492221578Z" level=error msg="Failed to destroy network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.492748 containerd[1629]: time="2025-09-12T19:46:12.492602313Z" level=error msg="encountered an error cleaning up failed sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.492748 containerd[1629]: time="2025-09-12T19:46:12.492654039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wltfb,Uid:448b2875-d499-4595-84f9-0b0c9ee28b39,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.493452 kubelet[2856]: E0912 19:46:12.493039 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.493452 kubelet[2856]: E0912 19:46:12.493099 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wltfb" Sep 12 
19:46:12.493452 kubelet[2856]: E0912 19:46:12.493126 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wltfb" Sep 12 19:46:12.493452 kubelet[2856]: E0912 19:46:12.493230 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wltfb_kube-system(448b2875-d499-4595-84f9-0b0c9ee28b39)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wltfb_kube-system(448b2875-d499-4595-84f9-0b0c9ee28b39)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wltfb" podUID="448b2875-d499-4595-84f9-0b0c9ee28b39" Sep 12 19:46:12.584459 containerd[1629]: time="2025-09-12T19:46:12.584312830Z" level=error msg="Failed to destroy network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.585256 containerd[1629]: time="2025-09-12T19:46:12.585208649Z" level=error msg="encountered an error cleaning up failed sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.585386 containerd[1629]: time="2025-09-12T19:46:12.585330622Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k2xgk,Uid:c267a380-4b7e-4a52-b71c-69a8c98b3163,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.586891 kubelet[2856]: E0912 19:46:12.586028 2856 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.586891 kubelet[2856]: E0912 19:46:12.586142 2856 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k2xgk" Sep 12 19:46:12.586891 kubelet[2856]: E0912 19:46:12.586183 2856 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k2xgk" Sep 12 19:46:12.586891 kubelet[2856]: E0912 19:46:12.586261 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k2xgk_calico-system(c267a380-4b7e-4a52-b71c-69a8c98b3163)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k2xgk_calico-system(c267a380-4b7e-4a52-b71c-69a8c98b3163)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:12.694551 kubelet[2856]: I0912 19:46:12.693217 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:12.702310 kubelet[2856]: I0912 19:46:12.702280 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:12.722598 kubelet[2856]: I0912 19:46:12.722563 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:12.726134 kubelet[2856]: I0912 19:46:12.726096 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:12.745297 containerd[1629]: time="2025-09-12T19:46:12.742556498Z" level=info msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\"" Sep 12 19:46:12.747793 containerd[1629]: 
time="2025-09-12T19:46:12.743631306Z" level=info msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" Sep 12 19:46:12.757391 containerd[1629]: time="2025-09-12T19:46:12.757174756Z" level=info msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" Sep 12 19:46:12.758047 containerd[1629]: time="2025-09-12T19:46:12.757892984Z" level=info msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\"" Sep 12 19:46:12.758047 containerd[1629]: time="2025-09-12T19:46:12.757992889Z" level=info msg="Ensure that sandbox dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61 in task-service has been cleanup successfully" Sep 12 19:46:12.760192 containerd[1629]: time="2025-09-12T19:46:12.759638794Z" level=info msg="Ensure that sandbox a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02 in task-service has been cleanup successfully" Sep 12 19:46:12.760271 kubelet[2856]: I0912 19:46:12.758692 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:12.760414 containerd[1629]: time="2025-09-12T19:46:12.760385036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\"" Sep 12 19:46:12.762291 containerd[1629]: time="2025-09-12T19:46:12.762248470Z" level=info msg="Ensure that sandbox 1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a in task-service has been cleanup successfully" Sep 12 19:46:12.772660 containerd[1629]: time="2025-09-12T19:46:12.772210601Z" level=info msg="Ensure that sandbox ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919 in task-service has been cleanup successfully" Sep 12 19:46:12.783936 containerd[1629]: time="2025-09-12T19:46:12.773196528Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:12.784515 
containerd[1629]: time="2025-09-12T19:46:12.784483715Z" level=info msg="Ensure that sandbox 989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d in task-service has been cleanup successfully" Sep 12 19:46:12.785627 kubelet[2856]: I0912 19:46:12.785596 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:12.810940 containerd[1629]: time="2025-09-12T19:46:12.802832639Z" level=info msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" Sep 12 19:46:12.811225 containerd[1629]: time="2025-09-12T19:46:12.811191131Z" level=info msg="Ensure that sandbox 710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2 in task-service has been cleanup successfully" Sep 12 19:46:12.812299 kubelet[2856]: I0912 19:46:12.811800 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:12.819102 containerd[1629]: time="2025-09-12T19:46:12.819060959Z" level=info msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" Sep 12 19:46:12.819938 containerd[1629]: time="2025-09-12T19:46:12.819749900Z" level=info msg="Ensure that sandbox 64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702 in task-service has been cleanup successfully" Sep 12 19:46:12.826243 kubelet[2856]: I0912 19:46:12.826167 2856 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:12.830147 containerd[1629]: time="2025-09-12T19:46:12.829966630Z" level=info msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\"" Sep 12 19:46:12.830802 containerd[1629]: time="2025-09-12T19:46:12.830654685Z" level=info msg="Ensure that sandbox 
70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690 in task-service has been cleanup successfully" Sep 12 19:46:12.910803 containerd[1629]: time="2025-09-12T19:46:12.910607199Z" level=error msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" failed" error="failed to destroy network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.911412 kubelet[2856]: E0912 19:46:12.911046 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:12.913894 kubelet[2856]: E0912 19:46:12.911150 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61"} Sep 12 19:46:12.913894 kubelet[2856]: E0912 19:46:12.912445 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"35e0e2e0-2224-46e0-8d30-2bff8d42b502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.913894 kubelet[2856]: E0912 19:46:12.912483 2856 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"35e0e2e0-2224-46e0-8d30-2bff8d42b502\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" podUID="35e0e2e0-2224-46e0-8d30-2bff8d42b502" Sep 12 19:46:12.924028 containerd[1629]: time="2025-09-12T19:46:12.923853308Z" level=error msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" failed" error="failed to destroy network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.924761 kubelet[2856]: E0912 19:46:12.924707 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:12.924851 kubelet[2856]: E0912 19:46:12.924776 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702"} Sep 12 19:46:12.924966 kubelet[2856]: E0912 19:46:12.924838 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"448b2875-d499-4595-84f9-0b0c9ee28b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.924966 kubelet[2856]: E0912 19:46:12.924912 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"448b2875-d499-4595-84f9-0b0c9ee28b39\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wltfb" podUID="448b2875-d499-4595-84f9-0b0c9ee28b39" Sep 12 19:46:12.940170 containerd[1629]: time="2025-09-12T19:46:12.940107058Z" level=error msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" failed" error="failed to destroy network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.940994 kubelet[2856]: E0912 19:46:12.940741 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:12.940994 kubelet[2856]: E0912 19:46:12.940814 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d"} Sep 12 19:46:12.940994 kubelet[2856]: E0912 19:46:12.940904 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.940994 kubelet[2856]: E0912 19:46:12.940950 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd8bb6677-w7ftn" podUID="78be5b80-9110-4f94-92fe-b7d6cd61fcbc" Sep 12 19:46:12.984079 containerd[1629]: time="2025-09-12T19:46:12.983944199Z" level=error msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" failed" error="failed to destroy network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Sep 12 19:46:12.986285 kubelet[2856]: E0912 19:46:12.986026 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:12.986285 kubelet[2856]: E0912 19:46:12.986107 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a"} Sep 12 19:46:12.986285 kubelet[2856]: E0912 19:46:12.986205 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d4a4fa81-9bd4-48b5-8758-d5569260af4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.986285 kubelet[2856]: E0912 19:46:12.986243 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d4a4fa81-9bd4-48b5-8758-d5569260af4c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" podUID="d4a4fa81-9bd4-48b5-8758-d5569260af4c" Sep 12 19:46:12.993904 
containerd[1629]: time="2025-09-12T19:46:12.993246965Z" level=error msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" failed" error="failed to destroy network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.994051 kubelet[2856]: E0912 19:46:12.993526 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:12.994051 kubelet[2856]: E0912 19:46:12.993580 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02"} Sep 12 19:46:12.994051 kubelet[2856]: E0912 19:46:12.993635 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b92340aa-ced1-4270-b056-3fe317e4937f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.994051 kubelet[2856]: E0912 19:46:12.993663 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b92340aa-ced1-4270-b056-3fe317e4937f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-59796884b-cxjns" podUID="b92340aa-ced1-4270-b056-3fe317e4937f" Sep 12 19:46:12.995122 containerd[1629]: time="2025-09-12T19:46:12.994680005Z" level=error msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" failed" error="failed to destroy network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.995238 kubelet[2856]: E0912 19:46:12.994922 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:12.995238 kubelet[2856]: E0912 19:46:12.994960 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919"} Sep 12 19:46:12.995238 kubelet[2856]: E0912 19:46:12.995196 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c267a380-4b7e-4a52-b71c-69a8c98b3163\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for 
sandbox \\\"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.995554 kubelet[2856]: E0912 19:46:12.995237 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c267a380-4b7e-4a52-b71c-69a8c98b3163\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k2xgk" podUID="c267a380-4b7e-4a52-b71c-69a8c98b3163" Sep 12 19:46:12.999523 containerd[1629]: time="2025-09-12T19:46:12.999477677Z" level=error msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" failed" error="failed to destroy network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:12.999948 kubelet[2856]: E0912 19:46:12.999659 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:12.999948 kubelet[2856]: E0912 19:46:12.999703 2856 
kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2"} Sep 12 19:46:12.999948 kubelet[2856]: E0912 19:46:12.999752 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6ae76510-d686-4c2f-936d-931b1dc2f310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:12.999948 kubelet[2856]: E0912 19:46:12.999784 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6ae76510-d686-4c2f-936d-931b1dc2f310\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-nj4fp" podUID="6ae76510-d686-4c2f-936d-931b1dc2f310" Sep 12 19:46:13.003170 containerd[1629]: time="2025-09-12T19:46:13.003019838Z" level=error msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" failed" error="failed to destroy network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:13.003340 kubelet[2856]: E0912 19:46:13.003196 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:13.003340 kubelet[2856]: E0912 19:46:13.003271 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690"} Sep 12 19:46:13.003340 kubelet[2856]: E0912 19:46:13.003320 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:13.003624 kubelet[2856]: E0912 19:46:13.003361 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47b90bcb-34aa-42dd-ada4-6ab99df0bd40\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7988f88666-glqx6" podUID="47b90bcb-34aa-42dd-ada4-6ab99df0bd40" Sep 12 19:46:14.169408 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:14.176922 systemd-journald[1180]: Under memory pressure, flushing caches. 
Sep 12 19:46:14.169504 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:22.616141 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount327632505.mount: Deactivated successfully. Sep 12 19:46:22.723564 containerd[1629]: time="2025-09-12T19:46:22.722787109Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:22.724646 containerd[1629]: time="2025-09-12T19:46:22.703682244Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 19:46:22.749231 containerd[1629]: time="2025-09-12T19:46:22.749135592Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:22.751881 containerd[1629]: time="2025-09-12T19:46:22.751093190Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:22.752848 containerd[1629]: time="2025-09-12T19:46:22.752766725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 9.989831974s" Sep 12 19:46:22.752965 containerd[1629]: time="2025-09-12T19:46:22.752847797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 19:46:22.811582 containerd[1629]: time="2025-09-12T19:46:22.811519587Z" level=info msg="CreateContainer within sandbox 
\"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 19:46:22.855178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241312943.mount: Deactivated successfully. Sep 12 19:46:22.882173 containerd[1629]: time="2025-09-12T19:46:22.880164607Z" level=info msg="CreateContainer within sandbox \"63c0c5d55f4404f18569d41587893c46396cdc4d3e7c8172e890fe76a87ec467\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"617b216468fa2c802dfa262ad6161063a0a5622c302eca3e583e5b559496ff7f\"" Sep 12 19:46:22.885906 containerd[1629]: time="2025-09-12T19:46:22.884313432Z" level=info msg="StartContainer for \"617b216468fa2c802dfa262ad6161063a0a5622c302eca3e583e5b559496ff7f\"" Sep 12 19:46:22.954030 kubelet[2856]: I0912 19:46:22.953292 2856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 19:46:23.222969 containerd[1629]: time="2025-09-12T19:46:23.222067326Z" level=info msg="StartContainer for \"617b216468fa2c802dfa262ad6161063a0a5622c302eca3e583e5b559496ff7f\" returns successfully" Sep 12 19:46:23.447435 containerd[1629]: time="2025-09-12T19:46:23.445891953Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:23.455353 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 19:46:23.455576 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Sep 12 19:46:23.539292 containerd[1629]: time="2025-09-12T19:46:23.539081567Z" level=error msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" failed" error="failed to destroy network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:46:23.543072 kubelet[2856]: E0912 19:46:23.540778 2856 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:23.543072 kubelet[2856]: E0912 19:46:23.540908 2856 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d"} Sep 12 19:46:23.543072 kubelet[2856]: E0912 19:46:23.540988 2856 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:46:23.544513 kubelet[2856]: E0912 19:46:23.543380 2856 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bd8bb6677-w7ftn" podUID="78be5b80-9110-4f94-92fe-b7d6cd61fcbc" Sep 12 19:46:23.935749 containerd[1629]: time="2025-09-12T19:46:23.935655137Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:24.007891 kubelet[2856]: I0912 19:46:23.990912 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xz6fz" podStartSLOduration=1.932236685 podStartE2EDuration="25.963378252s" podCreationTimestamp="2025-09-12 19:45:58 +0000 UTC" firstStartedPulling="2025-09-12 19:45:58.72331859 +0000 UTC m=+23.475530595" lastFinishedPulling="2025-09-12 19:46:22.75446016 +0000 UTC m=+47.506672162" observedRunningTime="2025-09-12 19:46:23.962911809 +0000 UTC m=+48.715123830" watchObservedRunningTime="2025-09-12 19:46:23.963378252 +0000 UTC m=+48.715590264" Sep 12 19:46:24.089095 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:24.092196 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:24.089127 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:24.116556 systemd[1]: run-containerd-runc-k8s.io-617b216468fa2c802dfa262ad6161063a0a5622c302eca3e583e5b559496ff7f-runc.ovajuS.mount: Deactivated successfully. 
Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.136 [INFO][4109] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.138 [INFO][4109] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" iface="eth0" netns="/var/run/netns/cni-fcf2965b-4858-2363-0550-87d3daa4e5f7" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.140 [INFO][4109] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" iface="eth0" netns="/var/run/netns/cni-fcf2965b-4858-2363-0550-87d3daa4e5f7" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.145 [INFO][4109] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" iface="eth0" netns="/var/run/netns/cni-fcf2965b-4858-2363-0550-87d3daa4e5f7" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.146 [INFO][4109] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.146 [INFO][4109] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.402 [INFO][4131] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.405 [INFO][4131] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.405 [INFO][4131] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.423 [WARNING][4131] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.424 [INFO][4131] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.425 [INFO][4131] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:24.430792 containerd[1629]: 2025-09-12 19:46:24.428 [INFO][4109] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:24.433229 containerd[1629]: time="2025-09-12T19:46:24.431117736Z" level=info msg="TearDown network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" successfully" Sep 12 19:46:24.433229 containerd[1629]: time="2025-09-12T19:46:24.431177792Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" returns successfully" Sep 12 19:46:24.441343 containerd[1629]: time="2025-09-12T19:46:24.437901340Z" level=info msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" Sep 12 19:46:24.440672 systemd[1]: run-netns-cni\x2dfcf2965b\x2d4858\x2d2363\x2d0550\x2d87d3daa4e5f7.mount: Deactivated successfully. Sep 12 19:46:24.560561 kubelet[2856]: I0912 19:46:24.560159 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-backend-key-pair\") pod \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " Sep 12 19:46:24.560561 kubelet[2856]: I0912 19:46:24.560233 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-ca-bundle\") pod \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " Sep 12 19:46:24.560561 kubelet[2856]: I0912 19:46:24.560306 2856 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-kube-api-access-d8ttj\") pod \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\" (UID: \"78be5b80-9110-4f94-92fe-b7d6cd61fcbc\") " Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.505 [INFO][4151] cni-plugin/k8s.go 640: 
Cleaning up netns ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.507 [INFO][4151] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" iface="eth0" netns="/var/run/netns/cni-922d8dd1-91fd-371e-dc43-7c4c1c0c14fa" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.509 [INFO][4151] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" iface="eth0" netns="/var/run/netns/cni-922d8dd1-91fd-371e-dc43-7c4c1c0c14fa" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.511 [INFO][4151] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" iface="eth0" netns="/var/run/netns/cni-922d8dd1-91fd-371e-dc43-7c4c1c0c14fa" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.511 [INFO][4151] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.511 [INFO][4151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.551 [INFO][4159] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.551 [INFO][4159] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.551 [INFO][4159] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.562 [WARNING][4159] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.562 [INFO][4159] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.564 [INFO][4159] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:24.574229 containerd[1629]: 2025-09-12 19:46:24.569 [INFO][4151] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:24.583066 containerd[1629]: time="2025-09-12T19:46:24.574192528Z" level=info msg="TearDown network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" successfully" Sep 12 19:46:24.583066 containerd[1629]: time="2025-09-12T19:46:24.574929468Z" level=info msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" returns successfully" Sep 12 19:46:24.582585 systemd[1]: run-netns-cni\x2d922d8dd1\x2d91fd\x2d371e\x2ddc43\x2d7c4c1c0c14fa.mount: Deactivated successfully. 
Sep 12 19:46:24.585805 containerd[1629]: time="2025-09-12T19:46:24.585392238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wltfb,Uid:448b2875-d499-4595-84f9-0b0c9ee28b39,Namespace:kube-system,Attempt:1,}" Sep 12 19:46:24.608430 kubelet[2856]: I0912 19:46:24.607043 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "78be5b80-9110-4f94-92fe-b7d6cd61fcbc" (UID: "78be5b80-9110-4f94-92fe-b7d6cd61fcbc"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 12 19:46:24.608662 kubelet[2856]: I0912 19:46:24.608542 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-kube-api-access-d8ttj" (OuterVolumeSpecName: "kube-api-access-d8ttj") pod "78be5b80-9110-4f94-92fe-b7d6cd61fcbc" (UID: "78be5b80-9110-4f94-92fe-b7d6cd61fcbc"). InnerVolumeSpecName "kube-api-access-d8ttj". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 12 19:46:24.608662 kubelet[2856]: I0912 19:46:24.608594 2856 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "78be5b80-9110-4f94-92fe-b7d6cd61fcbc" (UID: "78be5b80-9110-4f94-92fe-b7d6cd61fcbc"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 12 19:46:24.614539 systemd[1]: var-lib-kubelet-pods-78be5b80\x2d9110\x2d4f94\x2d92fe\x2db7d6cd61fcbc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd8ttj.mount: Deactivated successfully. 
Sep 12 19:46:24.615124 systemd[1]: var-lib-kubelet-pods-78be5b80\x2d9110\x2d4f94\x2d92fe\x2db7d6cd61fcbc-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 19:46:24.660987 kubelet[2856]: I0912 19:46:24.660924 2856 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-backend-key-pair\") on node \"srv-l18mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:46:24.660987 kubelet[2856]: I0912 19:46:24.660981 2856 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-whisker-ca-bundle\") on node \"srv-l18mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:46:24.661260 kubelet[2856]: I0912 19:46:24.660999 2856 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8ttj\" (UniqueName: \"kubernetes.io/projected/78be5b80-9110-4f94-92fe-b7d6cd61fcbc-kube-api-access-d8ttj\") on node \"srv-l18mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:46:24.930079 systemd-networkd[1258]: cali6f5a4f15e81: Link UP Sep 12 19:46:24.930479 systemd-networkd[1258]: cali6f5a4f15e81: Gained carrier Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.678 [INFO][4168] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.701 [INFO][4168] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0 coredns-7c65d6cfc9- kube-system 448b2875-d499-4595-84f9-0b0c9ee28b39 890 0 2025-09-12 19:45:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com coredns-7c65d6cfc9-wltfb 
eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6f5a4f15e81 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.701 [INFO][4168] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.838 [INFO][4179] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" HandleID="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.839 [INFO][4179] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" HandleID="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-wltfb", "timestamp":"2025-09-12 19:46:24.838802364 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:25.006727 
containerd[1629]: 2025-09-12 19:46:24.840 [INFO][4179] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.840 [INFO][4179] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.840 [INFO][4179] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.850 [INFO][4179] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.869 [INFO][4179] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.875 [INFO][4179] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.878 [INFO][4179] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.882 [INFO][4179] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.882 [INFO][4179] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.884 [INFO][4179] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.890 [INFO][4179] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.901 [INFO][4179] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.65/26] block=192.168.6.64/26 handle="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.902 [INFO][4179] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.65/26] handle="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.902 [INFO][4179] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:25.006727 containerd[1629]: 2025-09-12 19:46:24.902 [INFO][4179] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.65/26] IPv6=[] ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" HandleID="k8s-pod-network.435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.905 [INFO][4168] cni-plugin/k8s.go 418: Populated endpoint ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"448b2875-d499-4595-84f9-0b0c9ee28b39", ResourceVersion:"890", Generation:0, 
CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-wltfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f5a4f15e81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.905 [INFO][4168] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.65/32] ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.905 [INFO][4168] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6f5a4f15e81 
ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.932 [INFO][4168] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.935 [INFO][4168] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"448b2875-d499-4595-84f9-0b0c9ee28b39", ResourceVersion:"890", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", 
ContainerID:"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe", Pod:"coredns-7c65d6cfc9-wltfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f5a4f15e81", MAC:"aa:b1:cf:53:93:6d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:25.010088 containerd[1629]: 2025-09-12 19:46:24.972 [INFO][4168] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wltfb" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:25.108603 systemd[1]: run-containerd-runc-k8s.io-617b216468fa2c802dfa262ad6161063a0a5622c302eca3e583e5b559496ff7f-runc.5C8oV9.mount: Deactivated successfully. Sep 12 19:46:25.153037 containerd[1629]: time="2025-09-12T19:46:25.152732711Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:25.156960 containerd[1629]: time="2025-09-12T19:46:25.154344145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:25.156960 containerd[1629]: time="2025-09-12T19:46:25.154374290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:25.160294 containerd[1629]: time="2025-09-12T19:46:25.160220461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:25.264911 kubelet[2856]: I0912 19:46:25.264571 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfvwn\" (UniqueName: \"kubernetes.io/projected/45ad44b6-b4b4-4414-8af3-3dc77cfe4590-kube-api-access-dfvwn\") pod \"whisker-759bcb7f56-rw7jz\" (UID: \"45ad44b6-b4b4-4414-8af3-3dc77cfe4590\") " pod="calico-system/whisker-759bcb7f56-rw7jz" Sep 12 19:46:25.267328 kubelet[2856]: I0912 19:46:25.267101 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/45ad44b6-b4b4-4414-8af3-3dc77cfe4590-whisker-ca-bundle\") pod \"whisker-759bcb7f56-rw7jz\" (UID: \"45ad44b6-b4b4-4414-8af3-3dc77cfe4590\") " pod="calico-system/whisker-759bcb7f56-rw7jz" Sep 12 19:46:25.267328 kubelet[2856]: I0912 19:46:25.267183 2856 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/45ad44b6-b4b4-4414-8af3-3dc77cfe4590-whisker-backend-key-pair\") pod \"whisker-759bcb7f56-rw7jz\" (UID: \"45ad44b6-b4b4-4414-8af3-3dc77cfe4590\") " pod="calico-system/whisker-759bcb7f56-rw7jz" Sep 12 19:46:25.442476 containerd[1629]: time="2025-09-12T19:46:25.442265696Z" level=info msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" Sep 12 19:46:25.452984 containerd[1629]: time="2025-09-12T19:46:25.452256420Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759bcb7f56-rw7jz,Uid:45ad44b6-b4b4-4414-8af3-3dc77cfe4590,Namespace:calico-system,Attempt:0,}" Sep 12 19:46:25.457852 containerd[1629]: time="2025-09-12T19:46:25.457799620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wltfb,Uid:448b2875-d499-4595-84f9-0b0c9ee28b39,Namespace:kube-system,Attempt:1,} returns sandbox id \"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe\"" Sep 12 19:46:25.478973 kubelet[2856]: I0912 19:46:25.478922 2856 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78be5b80-9110-4f94-92fe-b7d6cd61fcbc" path="/var/lib/kubelet/pods/78be5b80-9110-4f94-92fe-b7d6cd61fcbc/volumes" Sep 12 19:46:25.489232 containerd[1629]: time="2025-09-12T19:46:25.489141749Z" level=info msg="CreateContainer within sandbox \"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 19:46:25.615513 containerd[1629]: time="2025-09-12T19:46:25.613902732Z" level=info msg="CreateContainer within sandbox \"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"64624838ed887ed0a1cf364e014a83363816cb264b0213bc2503867a365b8796\"" Sep 12 19:46:25.619902 containerd[1629]: time="2025-09-12T19:46:25.619398682Z" level=info msg="StartContainer for \"64624838ed887ed0a1cf364e014a83363816cb264b0213bc2503867a365b8796\"" Sep 12 19:46:26.089758 containerd[1629]: time="2025-09-12T19:46:26.087584046Z" level=info msg="StartContainer for \"64624838ed887ed0a1cf364e014a83363816cb264b0213bc2503867a365b8796\" returns successfully" Sep 12 19:46:26.331052 systemd-networkd[1258]: cali50d0ff3225e: Link UP Sep 12 19:46:26.338516 systemd-networkd[1258]: cali50d0ff3225e: Gained carrier Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:25.993 [INFO][4310] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:25.993 [INFO][4310] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" iface="eth0" netns="/var/run/netns/cni-927258cd-05fa-8c70-878a-99586902d35a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.018 [INFO][4310] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" iface="eth0" netns="/var/run/netns/cni-927258cd-05fa-8c70-878a-99586902d35a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.018 [INFO][4310] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" iface="eth0" netns="/var/run/netns/cni-927258cd-05fa-8c70-878a-99586902d35a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.018 [INFO][4310] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.018 [INFO][4310] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.252 [INFO][4405] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.252 [INFO][4405] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.265 [INFO][4405] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.301 [WARNING][4405] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.302 [INFO][4405] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.311 [INFO][4405] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:26.350362 containerd[1629]: 2025-09-12 19:46:26.337 [INFO][4310] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:26.364450 containerd[1629]: time="2025-09-12T19:46:26.357332679Z" level=info msg="TearDown network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" successfully" Sep 12 19:46:26.364450 containerd[1629]: time="2025-09-12T19:46:26.357412779Z" level=info msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" returns successfully" Sep 12 19:46:26.363790 systemd[1]: run-netns-cni\x2d927258cd\x2d05fa\x2d8c70\x2d878a\x2d99586902d35a.mount: Deactivated successfully. 
Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:25.717 [INFO][4311] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:25.769 [INFO][4311] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0 whisker-759bcb7f56- calico-system 45ad44b6-b4b4-4414-8af3-3dc77cfe4590 908 0 2025-09-12 19:46:25 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:759bcb7f56 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com whisker-759bcb7f56-rw7jz eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali50d0ff3225e [] [] }} ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:25.769 [INFO][4311] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.102 [INFO][4379] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" HandleID="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.102 [INFO][4379] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" HandleID="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e110), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"whisker-759bcb7f56-rw7jz", "timestamp":"2025-09-12 19:46:26.102187726 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.102 [INFO][4379] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.102 [INFO][4379] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.102 [INFO][4379] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.125 [INFO][4379] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.145 [INFO][4379] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.185 [INFO][4379] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.199 [INFO][4379] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.214 [INFO][4379] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.217 [INFO][4379] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.224 [INFO][4379] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2 Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.245 [INFO][4379] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.265 [INFO][4379] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.66/26] block=192.168.6.64/26 handle="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.265 [INFO][4379] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.66/26] handle="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.265 [INFO][4379] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:26.406805 containerd[1629]: 2025-09-12 19:46:26.265 [INFO][4379] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.66/26] IPv6=[] ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" HandleID="k8s-pod-network.92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.292 [INFO][4311] cni-plugin/k8s.go 418: Populated endpoint ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0", GenerateName:"whisker-759bcb7f56-", Namespace:"calico-system", SelfLink:"", UID:"45ad44b6-b4b4-4414-8af3-3dc77cfe4590", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"759bcb7f56", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"whisker-759bcb7f56-rw7jz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.6.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50d0ff3225e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.294 [INFO][4311] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.66/32] ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.294 [INFO][4311] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50d0ff3225e ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.344 [INFO][4311] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.345 [INFO][4311] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0", GenerateName:"whisker-759bcb7f56-", Namespace:"calico-system", SelfLink:"", UID:"45ad44b6-b4b4-4414-8af3-3dc77cfe4590", ResourceVersion:"908", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 46, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"759bcb7f56", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2", Pod:"whisker-759bcb7f56-rw7jz", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.6.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali50d0ff3225e", MAC:"02:b3:47:e1:15:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:26.409418 containerd[1629]: 2025-09-12 19:46:26.387 [INFO][4311] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2" Namespace="calico-system" Pod="whisker-759bcb7f56-rw7jz" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--759bcb7f56--rw7jz-eth0" Sep 12 19:46:26.482679 containerd[1629]: time="2025-09-12T19:46:26.482464784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-55zh4,Uid:d4a4fa81-9bd4-48b5-8758-d5569260af4c,Namespace:calico-apiserver,Attempt:1,}" Sep 12 19:46:26.483202 containerd[1629]: time="2025-09-12T19:46:26.482532682Z" level=info msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" Sep 12 19:46:26.509056 containerd[1629]: time="2025-09-12T19:46:26.508771279Z" level=info msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" Sep 12 19:46:26.683953 containerd[1629]: time="2025-09-12T19:46:26.679820888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:26.683953 containerd[1629]: time="2025-09-12T19:46:26.680822440Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:26.683953 containerd[1629]: time="2025-09-12T19:46:26.680847148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:26.683953 containerd[1629]: time="2025-09-12T19:46:26.682297575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:46:26.784103 systemd-networkd[1258]: cali6f5a4f15e81: Gained IPv6LL
Sep 12 19:46:26.998475 kubelet[2856]: I0912 19:46:26.997375 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wltfb" podStartSLOduration=45.99734779 podStartE2EDuration="45.99734779s" podCreationTimestamp="2025-09-12 19:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:46:26.993964008 +0000 UTC m=+51.746176030" watchObservedRunningTime="2025-09-12 19:46:26.99734779 +0000 UTC m=+51.749559797"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.817 [INFO][4449] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.818 [INFO][4449] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" iface="eth0" netns="/var/run/netns/cni-63eb20ef-9d2b-92f4-04b0-0212bfc9262a"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.821 [INFO][4449] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" iface="eth0" netns="/var/run/netns/cni-63eb20ef-9d2b-92f4-04b0-0212bfc9262a"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.828 [INFO][4449] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" iface="eth0" netns="/var/run/netns/cni-63eb20ef-9d2b-92f4-04b0-0212bfc9262a"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.828 [INFO][4449] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:26.828 [INFO][4449] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.101 [INFO][4504] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.105 [INFO][4504] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.105 [INFO][4504] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.133 [WARNING][4504] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.133 [INFO][4504] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.136 [INFO][4504] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:46:27.184092 containerd[1629]: 2025-09-12 19:46:27.144 [INFO][4449] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2"
Sep 12 19:46:27.195341 containerd[1629]: time="2025-09-12T19:46:27.185960065Z" level=info msg="TearDown network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" successfully"
Sep 12 19:46:27.195341 containerd[1629]: time="2025-09-12T19:46:27.186021876Z" level=info msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" returns successfully"
Sep 12 19:46:27.195341 containerd[1629]: time="2025-09-12T19:46:27.190627452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nj4fp,Uid:6ae76510-d686-4c2f-936d-931b1dc2f310,Namespace:kube-system,Attempt:1,}"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.935 [INFO][4478] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.937 [INFO][4478] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" iface="eth0" netns="/var/run/netns/cni-adeb2504-0680-05b6-53de-0a4578692243"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.937 [INFO][4478] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" iface="eth0" netns="/var/run/netns/cni-adeb2504-0680-05b6-53de-0a4578692243"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.937 [INFO][4478] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" iface="eth0" netns="/var/run/netns/cni-adeb2504-0680-05b6-53de-0a4578692243"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.937 [INFO][4478] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:26.937 [INFO][4478] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.111 [INFO][4518] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.111 [INFO][4518] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.137 [INFO][4518] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.150 [WARNING][4518] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.150 [INFO][4518] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0"
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.153 [INFO][4518] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:46:27.195341 containerd[1629]: 2025-09-12 19:46:27.176 [INFO][4478] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02"
Sep 12 19:46:27.195341 containerd[1629]: time="2025-09-12T19:46:27.194820075Z" level=info msg="TearDown network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" successfully"
Sep 12 19:46:27.195341 containerd[1629]: time="2025-09-12T19:46:27.195113092Z" level=info msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" returns successfully"
Sep 12 19:46:27.196102 systemd[1]: run-netns-cni\x2d63eb20ef\x2d9d2b\x2d92f4\x2d04b0\x2d0212bfc9262a.mount: Deactivated successfully.
Sep 12 19:46:27.209763 systemd[1]: run-netns-cni\x2dadeb2504\x2d0680\x2d05b6\x2d53de\x2d0a4578692243.mount: Deactivated successfully.
Sep 12 19:46:27.217016 containerd[1629]: time="2025-09-12T19:46:27.216779733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59796884b-cxjns,Uid:b92340aa-ced1-4270-b056-3fe317e4937f,Namespace:calico-system,Attempt:1,}"
Sep 12 19:46:27.259978 containerd[1629]: time="2025-09-12T19:46:27.259750161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-759bcb7f56-rw7jz,Uid:45ad44b6-b4b4-4414-8af3-3dc77cfe4590,Namespace:calico-system,Attempt:0,} returns sandbox id \"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2\""
Sep 12 19:46:27.278943 containerd[1629]: time="2025-09-12T19:46:27.278622448Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\""
Sep 12 19:46:27.471275 systemd-networkd[1258]: calid53ff5e316a: Link UP
Sep 12 19:46:27.483540 systemd-networkd[1258]: calid53ff5e316a: Gained carrier
Sep 12 19:46:27.526029 containerd[1629]: time="2025-09-12T19:46:27.525403756Z" level=info msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\""
Sep 12 19:46:27.527307 containerd[1629]: time="2025-09-12T19:46:27.526226205Z" level=info msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\""
Sep 12 19:46:27.535627 containerd[1629]: time="2025-09-12T19:46:27.526516299Z" level=info msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\""
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:26.990 [INFO][4452] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.064 [INFO][4452] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0 calico-apiserver-758b4974c- calico-apiserver d4a4fa81-9bd4-48b5-8758-d5569260af4c 914 0 2025-09-12 19:45:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:758b4974c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com calico-apiserver-758b4974c-55zh4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid53ff5e316a [] [] }} ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.066 [INFO][4452] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.291 [INFO][4530] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" HandleID="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.295 [INFO][4530] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" HandleID="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000314ac0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-l18mb.gb1.brightbox.com", "pod":"calico-apiserver-758b4974c-55zh4", "timestamp":"2025-09-12 19:46:27.290760004 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.300 [INFO][4530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.300 [INFO][4530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.300 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com'
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.326 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.342 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.356 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.361 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.370 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.370 [INFO][4530] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.375 [INFO][4530] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.390 [INFO][4530] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.410 [INFO][4530] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.67/26] block=192.168.6.64/26 handle="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.411 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.67/26] handle="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.411 [INFO][4530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:46:27.650322 containerd[1629]: 2025-09-12 19:46:27.411 [INFO][4530] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.67/26] IPv6=[] ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" HandleID="k8s-pod-network.33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.421 [INFO][4452] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4a4fa81-9bd4-48b5-8758-d5569260af4c", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-758b4974c-55zh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53ff5e316a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.423 [INFO][4452] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.67/32] ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.426 [INFO][4452] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid53ff5e316a ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.493 [INFO][4452] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.509 [INFO][4452] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4a4fa81-9bd4-48b5-8758-d5569260af4c", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665", Pod:"calico-apiserver-758b4974c-55zh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53ff5e316a", MAC:"fa:0b:78:25:ee:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:46:27.652148 containerd[1629]: 2025-09-12 19:46:27.579 [INFO][4452] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-55zh4" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0"
Sep 12 19:46:27.973667 containerd[1629]: time="2025-09-12T19:46:27.942837679Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:46:27.973667 containerd[1629]: time="2025-09-12T19:46:27.944659323Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:46:27.973667 containerd[1629]: time="2025-09-12T19:46:27.944683224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:46:27.973667 containerd[1629]: time="2025-09-12T19:46:27.945355525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:46:27.995759 systemd-networkd[1258]: cali50d0ff3225e: Gained IPv6LL
Sep 12 19:46:28.245905 kernel: bpftool[4720]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.059 [INFO][4601] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.060 [INFO][4601] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" iface="eth0" netns="/var/run/netns/cni-6080baaa-8e11-f9db-2790-87e24c5afa26"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.062 [INFO][4601] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" iface="eth0" netns="/var/run/netns/cni-6080baaa-8e11-f9db-2790-87e24c5afa26"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.067 [INFO][4601] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" iface="eth0" netns="/var/run/netns/cni-6080baaa-8e11-f9db-2790-87e24c5afa26"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.067 [INFO][4601] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.067 [INFO][4601] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.371 [INFO][4689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.389 [INFO][4689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.389 [INFO][4689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.415 [WARNING][4689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.415 [INFO][4689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0"
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.418 [INFO][4689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:46:28.436708 containerd[1629]: 2025-09-12 19:46:28.426 [INFO][4601] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919"
Sep 12 19:46:28.445881 containerd[1629]: time="2025-09-12T19:46:28.442613235Z" level=info msg="TearDown network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" successfully"
Sep 12 19:46:28.445881 containerd[1629]: time="2025-09-12T19:46:28.443354678Z" level=info msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" returns successfully"
Sep 12 19:46:28.451456 systemd[1]: run-netns-cni\x2d6080baaa\x2d8e11\x2df9db\x2d2790\x2d87e24c5afa26.mount: Deactivated successfully.
Sep 12 19:46:28.462679 containerd[1629]: time="2025-09-12T19:46:28.461829085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-55zh4,Uid:d4a4fa81-9bd4-48b5-8758-d5569260af4c,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665\""
Sep 12 19:46:28.464532 containerd[1629]: time="2025-09-12T19:46:28.463717227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k2xgk,Uid:c267a380-4b7e-4a52-b71c-69a8c98b3163,Namespace:calico-system,Attempt:1,}"
Sep 12 19:46:28.547086 systemd-networkd[1258]: cali6db72e1109d: Link UP
Sep 12 19:46:28.552828 systemd-networkd[1258]: cali6db72e1109d: Gained carrier
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:27.692 [INFO][4545] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:27.892 [INFO][4545] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0 coredns-7c65d6cfc9- kube-system 6ae76510-d686-4c2f-936d-931b1dc2f310 924 0 2025-09-12 19:45:41 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com coredns-7c65d6cfc9-nj4fp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6db72e1109d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:27.892 [INFO][4545] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.406 [INFO][4671] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" HandleID="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.407 [INFO][4671] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" HandleID="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000328410), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-nj4fp", "timestamp":"2025-09-12 19:46:28.406877802 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.407 [INFO][4671] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.418 [INFO][4671] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.418 [INFO][4671] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com'
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.433 [INFO][4671] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.445 [INFO][4671] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.463 [INFO][4671] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.471 [INFO][4671] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.475 [INFO][4671] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.475 [INFO][4671] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.479 [INFO][4671] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.494 [INFO][4671] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.510 [INFO][4671] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.68/26] block=192.168.6.64/26 handle="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.511 [INFO][4671] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.68/26] handle="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" host="srv-l18mb.gb1.brightbox.com"
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.512 [INFO][4671] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:46:28.629993 containerd[1629]: 2025-09-12 19:46:28.512 [INFO][4671] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.68/26] IPv6=[] ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" HandleID="k8s-pod-network.dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.525 [INFO][4545] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ae76510-d686-4c2f-936d-931b1dc2f310", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-nj4fp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6db72e1109d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.526 [INFO][4545] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.68/32] ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.526 [INFO][4545] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6db72e1109d ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.559 [INFO][4545] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.561 [INFO][4545] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ae76510-d686-4c2f-936d-931b1dc2f310", ResourceVersion:"924", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d", Pod:"coredns-7c65d6cfc9-nj4fp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6db72e1109d", MAC:"aa:13:c7:c2:9d:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:46:28.631945 containerd[1629]: 2025-09-12 19:46:28.626 [INFO][4545] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d" Namespace="kube-system" Pod="coredns-7c65d6cfc9-nj4fp" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0"
Sep 12 19:46:28.743209 systemd-networkd[1258]: calia5de5077e0c: Link UP
Sep 12 19:46:28.748162 systemd-networkd[1258]: calia5de5077e0c: Gained carrier
Sep 12 19:46:28.765997 systemd-networkd[1258]: calid53ff5e316a: Gained IPv6LL
Sep 12 19:46:28.767367 containerd[1629]: time="2025-09-12T19:46:28.767230530Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 12 19:46:28.767497 containerd[1629]: time="2025-09-12T19:46:28.767347126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 12 19:46:28.767497 containerd[1629]: time="2025-09-12T19:46:28.767372247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 12 19:46:28.767653 containerd[1629]: time="2025-09-12T19:46:28.767539385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.101 [INFO][4600] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.102 [INFO][4600] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" iface="eth0" netns="/var/run/netns/cni-dcd01d8b-32b7-0be0-dffd-4e6f0d3714f0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.103 [INFO][4600] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" iface="eth0" netns="/var/run/netns/cni-dcd01d8b-32b7-0be0-dffd-4e6f0d3714f0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.103 [INFO][4600] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" iface="eth0" netns="/var/run/netns/cni-dcd01d8b-32b7-0be0-dffd-4e6f0d3714f0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.103 [INFO][4600] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.103 [INFO][4600] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.443 [INFO][4697] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.447 [INFO][4697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.729 [INFO][4697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.762 [WARNING][4697] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.762 [INFO][4697] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.765 [INFO][4697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:28.784144 containerd[1629]: 2025-09-12 19:46:28.779 [INFO][4600] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:28.804371 systemd[1]: run-netns-cni\x2ddcd01d8b\x2d32b7\x2d0be0\x2ddffd\x2d4e6f0d3714f0.mount: Deactivated successfully. Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.931 [INFO][4602] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.931 [INFO][4602] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" iface="eth0" netns="/var/run/netns/cni-f8e383f5-56bc-16e7-238d-dc3069a3b06e" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.932 [INFO][4602] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" iface="eth0" netns="/var/run/netns/cni-f8e383f5-56bc-16e7-238d-dc3069a3b06e" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.962 [INFO][4602] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" iface="eth0" netns="/var/run/netns/cni-f8e383f5-56bc-16e7-238d-dc3069a3b06e" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.962 [INFO][4602] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:27.962 [INFO][4602] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.425 [INFO][4669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.425 [INFO][4669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.699 [INFO][4669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.719 [WARNING][4669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.719 [INFO][4669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.727 [INFO][4669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:28.813095 containerd[1629]: 2025-09-12 19:46:28.770 [INFO][4602] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:28.817914 containerd[1629]: time="2025-09-12T19:46:28.789435231Z" level=info msg="TearDown network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" successfully" Sep 12 19:46:28.817914 containerd[1629]: time="2025-09-12T19:46:28.816066109Z" level=info msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" returns successfully" Sep 12 19:46:28.817914 containerd[1629]: time="2025-09-12T19:46:28.815990120Z" level=info msg="TearDown network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" successfully" Sep 12 19:46:28.817914 containerd[1629]: time="2025-09-12T19:46:28.816188645Z" level=info msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" returns successfully" Sep 12 19:46:28.820158 systemd[1]: run-netns-cni\x2df8e383f5\x2d56bc\x2d16e7\x2d238d\x2ddc3069a3b06e.mount: Deactivated successfully. 
Sep 12 19:46:28.823284 containerd[1629]: time="2025-09-12T19:46:28.823230777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-zpjm5,Uid:35e0e2e0-2224-46e0-8d30-2bff8d42b502,Namespace:calico-apiserver,Attempt:1,}" Sep 12 19:46:28.824077 containerd[1629]: time="2025-09-12T19:46:28.823684427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-glqx6,Uid:47b90bcb-34aa-42dd-ada4-6ab99df0bd40,Namespace:calico-system,Attempt:1,}" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:27.782 [INFO][4555] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:27.914 [INFO][4555] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0 calico-kube-controllers-59796884b- calico-system b92340aa-ced1-4270-b056-3fe317e4937f 926 0 2025-09-12 19:45:58 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:59796884b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com calico-kube-controllers-59796884b-cxjns eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia5de5077e0c [] [] }} ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:27.914 [INFO][4555] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" 
Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.405 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" HandleID="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.407 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" HandleID="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000103980), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"calico-kube-controllers-59796884b-cxjns", "timestamp":"2025-09-12 19:46:28.405719613 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.408 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.512 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.514 [INFO][4668] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.559 [INFO][4668] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.601 [INFO][4668] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.626 [INFO][4668] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.640 [INFO][4668] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.648 [INFO][4668] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.648 [INFO][4668] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.670 [INFO][4668] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230 Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.679 [INFO][4668] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.697 [INFO][4668] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.69/26] block=192.168.6.64/26 handle="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.697 [INFO][4668] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.69/26] handle="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.697 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:28.830776 containerd[1629]: 2025-09-12 19:46:28.697 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.69/26] IPv6=[] ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" HandleID="k8s-pod-network.5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.714 [INFO][4555] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0", GenerateName:"calico-kube-controllers-59796884b-", Namespace:"calico-system", SelfLink:"", UID:"b92340aa-ced1-4270-b056-3fe317e4937f", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59796884b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-59796884b-cxjns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5de5077e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.714 [INFO][4555] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.69/32] ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.714 [INFO][4555] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5de5077e0c ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.752 [INFO][4555] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.767 [INFO][4555] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0", GenerateName:"calico-kube-controllers-59796884b-", Namespace:"calico-system", SelfLink:"", UID:"b92340aa-ced1-4270-b056-3fe317e4937f", ResourceVersion:"926", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59796884b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230", Pod:"calico-kube-controllers-59796884b-cxjns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5de5077e0c", MAC:"6e:ea:b3:09:fd:4f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:28.832658 containerd[1629]: 2025-09-12 19:46:28.822 [INFO][4555] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230" Namespace="calico-system" Pod="calico-kube-controllers-59796884b-cxjns" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:29.157771 containerd[1629]: time="2025-09-12T19:46:29.148939518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:29.157771 containerd[1629]: time="2025-09-12T19:46:29.149028679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:29.157771 containerd[1629]: time="2025-09-12T19:46:29.149060395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:29.157771 containerd[1629]: time="2025-09-12T19:46:29.149254475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:29.211032 systemd-networkd[1258]: vxlan.calico: Link UP Sep 12 19:46:29.213161 systemd-networkd[1258]: vxlan.calico: Gained carrier Sep 12 19:46:29.215131 containerd[1629]: time="2025-09-12T19:46:29.214975560Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nj4fp,Uid:6ae76510-d686-4c2f-936d-931b1dc2f310,Namespace:kube-system,Attempt:1,} returns sandbox id \"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d\"" Sep 12 19:46:29.240155 containerd[1629]: time="2025-09-12T19:46:29.239987032Z" level=info msg="CreateContainer within sandbox \"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 19:46:29.315553 containerd[1629]: time="2025-09-12T19:46:29.315468416Z" level=info msg="CreateContainer within sandbox \"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"98f0c7849327e8fcfdd040a34efa423b442ca33a1a822e23d1a7c63977fc9600\"" Sep 12 19:46:29.331028 containerd[1629]: time="2025-09-12T19:46:29.329040022Z" level=info msg="StartContainer for \"98f0c7849327e8fcfdd040a34efa423b442ca33a1a822e23d1a7c63977fc9600\"" Sep 12 19:46:29.331408 systemd-networkd[1258]: calieb24dd7c77e: Link UP Sep 12 19:46:29.333798 systemd-networkd[1258]: calieb24dd7c77e: Gained carrier Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.675 [INFO][4737] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0 csi-node-driver- calico-system c267a380-4b7e-4a52-b71c-69a8c98b3163 944 0 2025-09-12 19:45:58 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:856c6b598f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com csi-node-driver-k2xgk eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieb24dd7c77e [] [] }} ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.675 [INFO][4737] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.966 [INFO][4761] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" HandleID="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.967 [INFO][4761] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" HandleID="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e3e00), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"csi-node-driver-k2xgk", "timestamp":"2025-09-12 19:46:28.966885045 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.967 [INFO][4761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.967 [INFO][4761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:28.975 [INFO][4761] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.033 [INFO][4761] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.093 [INFO][4761] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.118 [INFO][4761] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.134 [INFO][4761] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.153 [INFO][4761] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.167 [INFO][4761] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.191 [INFO][4761] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.238 [INFO][4761] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.268 [INFO][4761] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.70/26] block=192.168.6.64/26 handle="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.271 [INFO][4761] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.70/26] handle="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.275 [INFO][4761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 19:46:29.425960 containerd[1629]: 2025-09-12 19:46:29.278 [INFO][4761] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.70/26] IPv6=[] ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" HandleID="k8s-pod-network.505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.301 [INFO][4737] cni-plugin/k8s.go 418: Populated endpoint ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c267a380-4b7e-4a52-b71c-69a8c98b3163", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-k2xgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.70/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb24dd7c77e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.301 [INFO][4737] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.70/32] ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.302 [INFO][4737] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieb24dd7c77e ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.351 [INFO][4737] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.369 [INFO][4737] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"c267a380-4b7e-4a52-b71c-69a8c98b3163", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a", Pod:"csi-node-driver-k2xgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb24dd7c77e", MAC:"ce:d1:88:79:91:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:29.429973 containerd[1629]: 2025-09-12 19:46:29.398 [INFO][4737] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a" Namespace="calico-system" Pod="csi-node-driver-k2xgk" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:29.591268 containerd[1629]: time="2025-09-12T19:46:29.589396059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-59796884b-cxjns,Uid:b92340aa-ced1-4270-b056-3fe317e4937f,Namespace:calico-system,Attempt:1,} returns sandbox id 
\"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230\"" Sep 12 19:46:29.594983 systemd-networkd[1258]: cali6db72e1109d: Gained IPv6LL Sep 12 19:46:29.741101 systemd-networkd[1258]: cali3e857e72671: Link UP Sep 12 19:46:29.757147 systemd-networkd[1258]: cali3e857e72671: Gained carrier Sep 12 19:46:29.786696 containerd[1629]: time="2025-09-12T19:46:29.785178465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:29.786696 containerd[1629]: time="2025-09-12T19:46:29.785830102Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:29.786696 containerd[1629]: time="2025-09-12T19:46:29.786102534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:29.786696 containerd[1629]: time="2025-09-12T19:46:29.786289053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.385 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0 goldmane-7988f88666- calico-system 47b90bcb-34aa-42dd-ada4-6ab99df0bd40 943 0 2025-09-12 19:45:57 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7988f88666 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com goldmane-7988f88666-glqx6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali3e857e72671 [] [] }} ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.387 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.624 [INFO][4924] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" HandleID="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.624 [INFO][4924] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" 
HandleID="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123d60), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-l18mb.gb1.brightbox.com", "pod":"goldmane-7988f88666-glqx6", "timestamp":"2025-09-12 19:46:29.624392872 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.624 [INFO][4924] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.624 [INFO][4924] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.625 [INFO][4924] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.652 [INFO][4924] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.671 [INFO][4924] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.682 [INFO][4924] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.685 [INFO][4924] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.690 [INFO][4924] ipam/ipam.go 235: Affinity is 
confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.691 [INFO][4924] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.694 [INFO][4924] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.702 [INFO][4924] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.719 [INFO][4924] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.71/26] block=192.168.6.64/26 handle="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.722 [INFO][4924] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.71/26] handle="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.723 [INFO][4924] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Sep 12 19:46:29.830146 containerd[1629]: 2025-09-12 19:46:29.723 [INFO][4924] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.71/26] IPv6=[] ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" HandleID="k8s-pod-network.cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.731 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"47b90bcb-34aa-42dd-ada4-6ab99df0bd40", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-7988f88666-glqx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3e857e72671", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.731 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.71/32] ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.731 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3e857e72671 ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.772 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.773 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", 
SelfLink:"", UID:"47b90bcb-34aa-42dd-ada4-6ab99df0bd40", ResourceVersion:"943", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa", Pod:"goldmane-7988f88666-glqx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3e857e72671", MAC:"52:5a:92:22:29:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:29.831511 containerd[1629]: 2025-09-12 19:46:29.807 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa" Namespace="calico-system" Pod="goldmane-7988f88666-glqx6" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:29.873367 systemd[1]: run-containerd-runc-k8s.io-98f0c7849327e8fcfdd040a34efa423b442ca33a1a822e23d1a7c63977fc9600-runc.ZdUtO0.mount: Deactivated successfully. 
Sep 12 19:46:29.913766 systemd-networkd[1258]: calia5de5077e0c: Gained IPv6LL Sep 12 19:46:29.939750 systemd-networkd[1258]: caliab316a5c800: Link UP Sep 12 19:46:29.953285 systemd-networkd[1258]: caliab316a5c800: Gained carrier Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.554 [INFO][4827] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0 calico-apiserver-758b4974c- calico-apiserver 35e0e2e0-2224-46e0-8d30-2bff8d42b502 945 0 2025-09-12 19:45:54 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:758b4974c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-l18mb.gb1.brightbox.com calico-apiserver-758b4974c-zpjm5 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab316a5c800 [] [] }} ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.555 [INFO][4827] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.716 [INFO][4944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" HandleID="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" 
Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.717 [INFO][4944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" HandleID="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c19a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-l18mb.gb1.brightbox.com", "pod":"calico-apiserver-758b4974c-zpjm5", "timestamp":"2025-09-12 19:46:29.716820901 +0000 UTC"}, Hostname:"srv-l18mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.718 [INFO][4944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.723 [INFO][4944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.723 [INFO][4944] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-l18mb.gb1.brightbox.com' Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.750 [INFO][4944] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.784 [INFO][4944] ipam/ipam.go 394: Looking up existing affinities for host host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.812 [INFO][4944] ipam/ipam.go 511: Trying affinity for 192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.816 [INFO][4944] ipam/ipam.go 158: Attempting to load block cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.821 [INFO][4944] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.6.64/26 host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.821 [INFO][4944] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.6.64/26 handle="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.826 [INFO][4944] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.845 [INFO][4944] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.6.64/26 handle="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.882 [INFO][4944] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.6.72/26] block=192.168.6.64/26 handle="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.885 [INFO][4944] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.6.72/26] handle="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" host="srv-l18mb.gb1.brightbox.com" Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.888 [INFO][4944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:30.045124 containerd[1629]: 2025-09-12 19:46:29.890 [INFO][4944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.6.72/26] IPv6=[] ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" HandleID="k8s-pod-network.7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.916 [INFO][4827] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e0e2e0-2224-46e0-8d30-2bff8d42b502", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-758b4974c-zpjm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab316a5c800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.916 [INFO][4827] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.6.72/32] ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.916 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab316a5c800 ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.961 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" 
Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.964 [INFO][4827] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e0e2e0-2224-46e0-8d30-2bff8d42b502", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f", Pod:"calico-apiserver-758b4974c-zpjm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"caliab316a5c800", MAC:"d6:95:64:07:03:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:30.048068 containerd[1629]: 2025-09-12 19:46:29.993 [INFO][4827] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f" Namespace="calico-apiserver" Pod="calico-apiserver-758b4974c-zpjm5" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:30.127181 containerd[1629]: time="2025-09-12T19:46:30.126944971Z" level=info msg="StartContainer for \"98f0c7849327e8fcfdd040a34efa423b442ca33a1a822e23d1a7c63977fc9600\" returns successfully" Sep 12 19:46:30.160466 containerd[1629]: time="2025-09-12T19:46:30.159263870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:30.162241 containerd[1629]: time="2025-09-12T19:46:30.160814736Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:30.162241 containerd[1629]: time="2025-09-12T19:46:30.160847956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:30.162241 containerd[1629]: time="2025-09-12T19:46:30.161020696Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:30.297918 systemd-networkd[1258]: vxlan.calico: Gained IPv6LL Sep 12 19:46:30.308154 containerd[1629]: time="2025-09-12T19:46:30.307346934Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:46:30.308154 containerd[1629]: time="2025-09-12T19:46:30.307438328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:46:30.308154 containerd[1629]: time="2025-09-12T19:46:30.307471928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:30.308154 containerd[1629]: time="2025-09-12T19:46:30.307652646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:46:30.476262 containerd[1629]: time="2025-09-12T19:46:30.474739545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k2xgk,Uid:c267a380-4b7e-4a52-b71c-69a8c98b3163,Namespace:calico-system,Attempt:1,} returns sandbox id \"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a\"" Sep 12 19:46:30.489885 systemd-networkd[1258]: calieb24dd7c77e: Gained IPv6LL Sep 12 19:46:30.499097 containerd[1629]: time="2025-09-12T19:46:30.498752997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7988f88666-glqx6,Uid:47b90bcb-34aa-42dd-ada4-6ab99df0bd40,Namespace:calico-system,Attempt:1,} returns sandbox id \"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa\"" Sep 12 19:46:30.565723 containerd[1629]: time="2025-09-12T19:46:30.565603258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-758b4974c-zpjm5,Uid:35e0e2e0-2224-46e0-8d30-2bff8d42b502,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f\"" Sep 12 19:46:30.692361 containerd[1629]: time="2025-09-12T19:46:30.692258466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:30.695847 
containerd[1629]: time="2025-09-12T19:46:30.695004943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 19:46:30.702761 containerd[1629]: time="2025-09-12T19:46:30.702703772Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:30.715602 containerd[1629]: time="2025-09-12T19:46:30.715485662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:30.723143 containerd[1629]: time="2025-09-12T19:46:30.723046725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 3.444360153s" Sep 12 19:46:30.723392 containerd[1629]: time="2025-09-12T19:46:30.723332503Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 19:46:30.728279 containerd[1629]: time="2025-09-12T19:46:30.728228623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 19:46:30.733657 containerd[1629]: time="2025-09-12T19:46:30.733586992Z" level=info msg="CreateContainer within sandbox \"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 19:46:30.754939 containerd[1629]: time="2025-09-12T19:46:30.754876952Z" level=info msg="CreateContainer within sandbox 
\"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"8ca1c13dc8cb3133ac2f2d0bfd8b81fe1193ff22e969ce3ce4ba29cf91216613\"" Sep 12 19:46:30.758169 containerd[1629]: time="2025-09-12T19:46:30.756025319Z" level=info msg="StartContainer for \"8ca1c13dc8cb3133ac2f2d0bfd8b81fe1193ff22e969ce3ce4ba29cf91216613\"" Sep 12 19:46:30.810750 systemd-networkd[1258]: cali3e857e72671: Gained IPv6LL Sep 12 19:46:30.880744 containerd[1629]: time="2025-09-12T19:46:30.880579407Z" level=info msg="StartContainer for \"8ca1c13dc8cb3133ac2f2d0bfd8b81fe1193ff22e969ce3ce4ba29cf91216613\" returns successfully" Sep 12 19:46:31.191796 kubelet[2856]: I0912 19:46:31.190157 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nj4fp" podStartSLOduration=50.190109225 podStartE2EDuration="50.190109225s" podCreationTimestamp="2025-09-12 19:45:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:46:30.200123437 +0000 UTC m=+54.952335459" watchObservedRunningTime="2025-09-12 19:46:31.190109225 +0000 UTC m=+55.942321241" Sep 12 19:46:31.321200 systemd-networkd[1258]: caliab316a5c800: Gained IPv6LL Sep 12 19:46:35.051941 containerd[1629]: time="2025-09-12T19:46:35.050648832Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 19:46:35.053980 containerd[1629]: time="2025-09-12T19:46:35.052664985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:35.056286 containerd[1629]: time="2025-09-12T19:46:35.055915722Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 
19:46:35.057510 containerd[1629]: time="2025-09-12T19:46:35.057464240Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 4.328714119s" Sep 12 19:46:35.057610 containerd[1629]: time="2025-09-12T19:46:35.057519819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 19:46:35.058902 containerd[1629]: time="2025-09-12T19:46:35.058839724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:35.061545 containerd[1629]: time="2025-09-12T19:46:35.059755852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\"" Sep 12 19:46:35.073356 containerd[1629]: time="2025-09-12T19:46:35.073319219Z" level=info msg="CreateContainer within sandbox \"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 19:46:35.090425 containerd[1629]: time="2025-09-12T19:46:35.090262870Z" level=info msg="CreateContainer within sandbox \"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"049f60a298cb1528185d806e371aa01af165c8fad2f7e13b8ca46674a62028ec\"" Sep 12 19:46:35.091670 containerd[1629]: time="2025-09-12T19:46:35.091637716Z" level=info msg="StartContainer for \"049f60a298cb1528185d806e371aa01af165c8fad2f7e13b8ca46674a62028ec\"" Sep 12 19:46:35.229141 containerd[1629]: 
time="2025-09-12T19:46:35.229091774Z" level=info msg="StartContainer for \"049f60a298cb1528185d806e371aa01af165c8fad2f7e13b8ca46674a62028ec\" returns successfully" Sep 12 19:46:35.630397 containerd[1629]: time="2025-09-12T19:46:35.629878386Z" level=info msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\"" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.738 [WARNING][5272] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"47b90bcb-34aa-42dd-ada4-6ab99df0bd40", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa", Pod:"goldmane-7988f88666-glqx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3e857e72671", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.739 [INFO][5272] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.739 [INFO][5272] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" iface="eth0" netns="" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.739 [INFO][5272] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.739 [INFO][5272] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.800 [INFO][5280] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.801 [INFO][5280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.801 [INFO][5280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.813 [WARNING][5280] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.813 [INFO][5280] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.815 [INFO][5280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:35.824677 containerd[1629]: 2025-09-12 19:46:35.821 [INFO][5272] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:35.828882 containerd[1629]: time="2025-09-12T19:46:35.826554319Z" level=info msg="TearDown network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" successfully" Sep 12 19:46:35.828882 containerd[1629]: time="2025-09-12T19:46:35.826603415Z" level=info msg="StopPodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" returns successfully" Sep 12 19:46:35.886356 containerd[1629]: time="2025-09-12T19:46:35.884228529Z" level=info msg="RemovePodSandbox for \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\"" Sep 12 19:46:35.890693 containerd[1629]: time="2025-09-12T19:46:35.890662376Z" level=info msg="Forcibly stopping sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\"" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.947 [WARNING][5295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0", GenerateName:"goldmane-7988f88666-", Namespace:"calico-system", SelfLink:"", UID:"47b90bcb-34aa-42dd-ada4-6ab99df0bd40", ResourceVersion:"964", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7988f88666", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa", Pod:"goldmane-7988f88666-glqx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.6.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali3e857e72671", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.947 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.947 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" iface="eth0" netns="" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.947 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.947 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.992 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.992 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:35.992 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:36.006 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:36.006 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" HandleID="k8s-pod-network.70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Workload="srv--l18mb.gb1.brightbox.com-k8s-goldmane--7988f88666--glqx6-eth0" Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:36.009 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:36.014442 containerd[1629]: 2025-09-12 19:46:36.011 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690" Sep 12 19:46:36.016015 containerd[1629]: time="2025-09-12T19:46:36.014472983Z" level=info msg="TearDown network for sandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" successfully" Sep 12 19:46:36.039725 containerd[1629]: time="2025-09-12T19:46:36.039678509Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:36.055763 containerd[1629]: time="2025-09-12T19:46:36.055727441Z" level=info msg="RemovePodSandbox \"70b08078f23636e0cbbf9b7d3205b56127074f3218765ca939187da774ad6690\" returns successfully" Sep 12 19:46:36.065808 containerd[1629]: time="2025-09-12T19:46:36.065343072Z" level=info msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\"" Sep 12 19:46:36.125885 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:36.122806 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:36.122849 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.157 [WARNING][5317] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e0e2e0-2224-46e0-8d30-2bff8d42b502", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", 
ContainerID:"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f", Pod:"calico-apiserver-758b4974c-zpjm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab316a5c800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.158 [INFO][5317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.158 [INFO][5317] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" iface="eth0" netns="" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.158 [INFO][5317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.158 [INFO][5317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.205 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.206 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.206 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.220 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.221 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.224 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:36.238188 containerd[1629]: 2025-09-12 19:46:36.233 [INFO][5317] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.238188 containerd[1629]: time="2025-09-12T19:46:36.238015857Z" level=info msg="TearDown network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" successfully" Sep 12 19:46:36.238188 containerd[1629]: time="2025-09-12T19:46:36.238048661Z" level=info msg="StopPodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" returns successfully" Sep 12 19:46:36.242855 containerd[1629]: time="2025-09-12T19:46:36.241686067Z" level=info msg="RemovePodSandbox for \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\"" Sep 12 19:46:36.242855 containerd[1629]: time="2025-09-12T19:46:36.241727256Z" level=info msg="Forcibly stopping sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\"" Sep 12 19:46:36.311885 kubelet[2856]: I0912 19:46:36.310049 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-758b4974c-55zh4" podStartSLOduration=35.719458649 podStartE2EDuration="42.309984099s" podCreationTimestamp="2025-09-12 19:45:54 +0000 UTC" firstStartedPulling="2025-09-12 19:46:28.46870895 +0000 UTC m=+53.220920953" lastFinishedPulling="2025-09-12 19:46:35.059234391 +0000 UTC m=+59.811446403" observedRunningTime="2025-09-12 19:46:36.302445762 +0000 UTC m=+61.054657781" watchObservedRunningTime="2025-09-12 19:46:36.309984099 +0000 UTC m=+61.062196097" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.376 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"35e0e2e0-2224-46e0-8d30-2bff8d42b502", ResourceVersion:"967", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f", Pod:"calico-apiserver-758b4974c-zpjm5", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab316a5c800", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.376 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.376 [INFO][5339] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" iface="eth0" netns="" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.376 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.376 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.414 [INFO][5348] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.415 [INFO][5348] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.415 [INFO][5348] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.425 [WARNING][5348] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.425 [INFO][5348] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" HandleID="k8s-pod-network.dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--zpjm5-eth0" Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.427 [INFO][5348] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:36.439514 containerd[1629]: 2025-09-12 19:46:36.433 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61" Sep 12 19:46:36.439514 containerd[1629]: time="2025-09-12T19:46:36.439088319Z" level=info msg="TearDown network for sandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" successfully" Sep 12 19:46:36.461154 containerd[1629]: time="2025-09-12T19:46:36.461092861Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:36.461387 containerd[1629]: time="2025-09-12T19:46:36.461179025Z" level=info msg="RemovePodSandbox \"dbbc87c34cb056cd85eee230064d5cdf9370e13738beb36f2d71fdbe4f900d61\" returns successfully" Sep 12 19:46:36.462586 containerd[1629]: time="2025-09-12T19:46:36.462503593Z" level=info msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.556 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0", GenerateName:"calico-kube-controllers-59796884b-", Namespace:"calico-system", SelfLink:"", UID:"b92340aa-ced1-4270-b056-3fe317e4937f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59796884b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230", Pod:"calico-kube-controllers-59796884b-cxjns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.69/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5de5077e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.556 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.556 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" iface="eth0" netns="" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.556 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.556 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.596 [INFO][5369] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.596 [INFO][5369] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.596 [INFO][5369] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.608 [WARNING][5369] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.608 [INFO][5369] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.610 [INFO][5369] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:36.615366 containerd[1629]: 2025-09-12 19:46:36.612 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.616717 containerd[1629]: time="2025-09-12T19:46:36.616423294Z" level=info msg="TearDown network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" successfully" Sep 12 19:46:36.616717 containerd[1629]: time="2025-09-12T19:46:36.616486031Z" level=info msg="StopPodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" returns successfully" Sep 12 19:46:36.617476 containerd[1629]: time="2025-09-12T19:46:36.617412302Z" level=info msg="RemovePodSandbox for \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" Sep 12 19:46:36.617476 containerd[1629]: time="2025-09-12T19:46:36.617456654Z" level=info msg="Forcibly stopping sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\"" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.688 [WARNING][5385] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0", GenerateName:"calico-kube-controllers-59796884b-", Namespace:"calico-system", SelfLink:"", UID:"b92340aa-ced1-4270-b056-3fe317e4937f", ResourceVersion:"954", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"59796884b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230", Pod:"calico-kube-controllers-59796884b-cxjns", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.6.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia5de5077e0c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.688 [INFO][5385] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.688 [INFO][5385] 
cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" iface="eth0" netns="" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.689 [INFO][5385] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.689 [INFO][5385] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.745 [INFO][5392] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.745 [INFO][5392] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.746 [INFO][5392] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.758 [WARNING][5392] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.758 [INFO][5392] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" HandleID="k8s-pod-network.a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--kube--controllers--59796884b--cxjns-eth0" Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.760 [INFO][5392] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:36.767241 containerd[1629]: 2025-09-12 19:46:36.764 [INFO][5385] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02" Sep 12 19:46:36.767241 containerd[1629]: time="2025-09-12T19:46:36.767161913Z" level=info msg="TearDown network for sandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" successfully" Sep 12 19:46:36.784240 containerd[1629]: time="2025-09-12T19:46:36.784189910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:36.784364 containerd[1629]: time="2025-09-12T19:46:36.784262840Z" level=info msg="RemovePodSandbox \"a80c78500dad219ddc2e43adba4e5e71af66252c97ad7805b8f9a8ad822dad02\" returns successfully" Sep 12 19:46:36.785577 containerd[1629]: time="2025-09-12T19:46:36.785295176Z" level=info msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.907 [WARNING][5407] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ae76510-d686-4c2f-936d-931b1dc2f310", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d", Pod:"coredns-7c65d6cfc9-nj4fp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6db72e1109d", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.907 [INFO][5407] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.907 [INFO][5407] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" iface="eth0" netns="" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.907 [INFO][5407] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.907 [INFO][5407] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.967 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.971 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.971 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.989 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.990 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.992 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:37.006418 containerd[1629]: 2025-09-12 19:46:36.996 [INFO][5407] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.006418 containerd[1629]: time="2025-09-12T19:46:37.005690331Z" level=info msg="TearDown network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" successfully" Sep 12 19:46:37.006418 containerd[1629]: time="2025-09-12T19:46:37.005731708Z" level=info msg="StopPodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" returns successfully" Sep 12 19:46:37.017815 containerd[1629]: time="2025-09-12T19:46:37.008063481Z" level=info msg="RemovePodSandbox for \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" Sep 12 19:46:37.017815 containerd[1629]: time="2025-09-12T19:46:37.008100522Z" level=info msg="Forcibly stopping sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\"" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.115 [WARNING][5428] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6ae76510-d686-4c2f-936d-931b1dc2f310", ResourceVersion:"982", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"dfdf0288b07e21806328c9035c35cc641fee092b6a729d98078836fe1db66f6d", Pod:"coredns-7c65d6cfc9-nj4fp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6db72e1109d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:37.214568 containerd[1629]: 
2025-09-12 19:46:37.116 [INFO][5428] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.116 [INFO][5428] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" iface="eth0" netns="" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.116 [INFO][5428] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.116 [INFO][5428] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.189 [INFO][5436] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.190 [INFO][5436] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.190 [INFO][5436] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.205 [WARNING][5436] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.205 [INFO][5436] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" HandleID="k8s-pod-network.710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--nj4fp-eth0" Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.207 [INFO][5436] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:37.214568 containerd[1629]: 2025-09-12 19:46:37.212 [INFO][5428] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2" Sep 12 19:46:37.216999 containerd[1629]: time="2025-09-12T19:46:37.215109679Z" level=info msg="TearDown network for sandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" successfully" Sep 12 19:46:37.235911 containerd[1629]: time="2025-09-12T19:46:37.234997914Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:37.235911 containerd[1629]: time="2025-09-12T19:46:37.235080967Z" level=info msg="RemovePodSandbox \"710f7e56679e9a13612eab10e396c7bb5a4ac6e6a0192b35ad15d53f9aa5d4c2\" returns successfully" Sep 12 19:46:37.236679 containerd[1629]: time="2025-09-12T19:46:37.236235965Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.416 [WARNING][5450] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.416 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.416 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" iface="eth0" netns="" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.417 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.417 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.515 [INFO][5457] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.515 [INFO][5457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.515 [INFO][5457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.535 [WARNING][5457] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.535 [INFO][5457] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.537 [INFO][5457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:37.550981 containerd[1629]: 2025-09-12 19:46:37.544 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.555651 containerd[1629]: time="2025-09-12T19:46:37.551033004Z" level=info msg="TearDown network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" successfully" Sep 12 19:46:37.555651 containerd[1629]: time="2025-09-12T19:46:37.551065195Z" level=info msg="StopPodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" returns successfully" Sep 12 19:46:37.566553 containerd[1629]: time="2025-09-12T19:46:37.566500738Z" level=info msg="RemovePodSandbox for \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:37.566929 containerd[1629]: time="2025-09-12T19:46:37.566554976Z" level=info msg="Forcibly stopping sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\"" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.688 [WARNING][5471] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up 
ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" WorkloadEndpoint="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.688 [INFO][5471] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.688 [INFO][5471] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" iface="eth0" netns="" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.689 [INFO][5471] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.689 [INFO][5471] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.778 [INFO][5479] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.779 [INFO][5479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.779 [INFO][5479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.794 [WARNING][5479] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.794 [INFO][5479] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" HandleID="k8s-pod-network.989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Workload="srv--l18mb.gb1.brightbox.com-k8s-whisker--bd8bb6677--w7ftn-eth0" Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.797 [INFO][5479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:37.803409 containerd[1629]: 2025-09-12 19:46:37.800 [INFO][5471] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d" Sep 12 19:46:37.803409 containerd[1629]: time="2025-09-12T19:46:37.803276097Z" level=info msg="TearDown network for sandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" successfully" Sep 12 19:46:37.810768 containerd[1629]: time="2025-09-12T19:46:37.810513461Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:37.810768 containerd[1629]: time="2025-09-12T19:46:37.810591373Z" level=info msg="RemovePodSandbox \"989dfc59634bbc1f98652d5ff77f3cbe257fb31987c54e588b5e21ee25df038d\" returns successfully" Sep 12 19:46:37.811511 containerd[1629]: time="2025-09-12T19:46:37.811463311Z" level=info msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:37.939 [WARNING][5494] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"448b2875-d499-4595-84f9-0b0c9ee28b39", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe", Pod:"coredns-7c65d6cfc9-wltfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f5a4f15e81", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:37.939 [INFO][5494] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:37.939 [INFO][5494] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" iface="eth0" netns="" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:37.939 [INFO][5494] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:37.940 [INFO][5494] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.057 [INFO][5501] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.059 [INFO][5501] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.059 [INFO][5501] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.072 [WARNING][5501] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.072 [INFO][5501] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.078 [INFO][5501] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:38.087023 containerd[1629]: 2025-09-12 19:46:38.084 [INFO][5494] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.090914 containerd[1629]: time="2025-09-12T19:46:38.087902879Z" level=info msg="TearDown network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" successfully" Sep 12 19:46:38.090914 containerd[1629]: time="2025-09-12T19:46:38.087940185Z" level=info msg="StopPodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" returns successfully" Sep 12 19:46:38.092165 containerd[1629]: time="2025-09-12T19:46:38.092100428Z" level=info msg="RemovePodSandbox for \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" Sep 12 19:46:38.092814 containerd[1629]: time="2025-09-12T19:46:38.092771950Z" level=info msg="Forcibly stopping sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\"" Sep 12 19:46:38.179802 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:38.169845 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:38.169932 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.233 [WARNING][5515] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"448b2875-d499-4595-84f9-0b0c9ee28b39", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"435a82fc41ee6ab03c2c5b6e21e66b9b63015b1100bde1ea6fb183fdb7e6d1fe", Pod:"coredns-7c65d6cfc9-wltfb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.6.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6f5a4f15e81", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:38.408945 containerd[1629]: 
2025-09-12 19:46:38.233 [INFO][5515] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.233 [INFO][5515] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" iface="eth0" netns="" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.234 [INFO][5515] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.234 [INFO][5515] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.367 [INFO][5524] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.369 [INFO][5524] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.370 [INFO][5524] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.388 [WARNING][5524] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.388 [INFO][5524] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" HandleID="k8s-pod-network.64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Workload="srv--l18mb.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wltfb-eth0" Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.393 [INFO][5524] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:38.408945 containerd[1629]: 2025-09-12 19:46:38.398 [INFO][5515] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702" Sep 12 19:46:38.408945 containerd[1629]: time="2025-09-12T19:46:38.404589643Z" level=info msg="TearDown network for sandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" successfully" Sep 12 19:46:38.419283 containerd[1629]: time="2025-09-12T19:46:38.418686011Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:38.419283 containerd[1629]: time="2025-09-12T19:46:38.418791352Z" level=info msg="RemovePodSandbox \"64b5b8477aa47eda708d35fece82d29058ff7732f37158d42258286130e26702\" returns successfully" Sep 12 19:46:38.420895 containerd[1629]: time="2025-09-12T19:46:38.420716060Z" level=info msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.544 [WARNING][5542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4a4fa81-9bd4-48b5-8758-d5569260af4c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665", Pod:"calico-apiserver-758b4974c-55zh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53ff5e316a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.546 [INFO][5542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.546 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" iface="eth0" netns="" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.546 [INFO][5542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.547 [INFO][5542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.619 [INFO][5549] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.620 [INFO][5549] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.620 [INFO][5549] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.638 [WARNING][5549] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.639 [INFO][5549] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.644 [INFO][5549] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:38.655072 containerd[1629]: 2025-09-12 19:46:38.649 [INFO][5542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.655072 containerd[1629]: time="2025-09-12T19:46:38.654520953Z" level=info msg="TearDown network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" successfully" Sep 12 19:46:38.655072 containerd[1629]: time="2025-09-12T19:46:38.654567870Z" level=info msg="StopPodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" returns successfully" Sep 12 19:46:38.657379 containerd[1629]: time="2025-09-12T19:46:38.657107326Z" level=info msg="RemovePodSandbox for \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" Sep 12 19:46:38.657732 containerd[1629]: time="2025-09-12T19:46:38.657648503Z" level=info msg="Forcibly stopping sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\"" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.738 [WARNING][5563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0", GenerateName:"calico-apiserver-758b4974c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d4a4fa81-9bd4-48b5-8758-d5569260af4c", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"758b4974c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"33ff5e8d43c4ff2f810a5ab5b3bbf8b97919700508a626363f5a0503f0ea0665", Pod:"calico-apiserver-758b4974c-55zh4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.6.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid53ff5e316a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.738 [INFO][5563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.738 [INFO][5563] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" iface="eth0" netns="" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.738 [INFO][5563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.738 [INFO][5563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.793 [INFO][5571] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.794 [INFO][5571] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.794 [INFO][5571] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.810 [WARNING][5571] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.810 [INFO][5571] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" HandleID="k8s-pod-network.1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Workload="srv--l18mb.gb1.brightbox.com-k8s-calico--apiserver--758b4974c--55zh4-eth0" Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.812 [INFO][5571] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:38.819897 containerd[1629]: 2025-09-12 19:46:38.816 [INFO][5563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a" Sep 12 19:46:38.819897 containerd[1629]: time="2025-09-12T19:46:38.819638019Z" level=info msg="TearDown network for sandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" successfully" Sep 12 19:46:38.825526 containerd[1629]: time="2025-09-12T19:46:38.825106213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:38.825526 containerd[1629]: time="2025-09-12T19:46:38.825182169Z" level=info msg="RemovePodSandbox \"1afcc908377f0ba03a4b5834904d2f1616117be39980b8ae8d011c4e90081e7a\" returns successfully" Sep 12 19:46:38.826032 containerd[1629]: time="2025-09-12T19:46:38.825823526Z" level=info msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\"" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.911 [WARNING][5586] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c267a380-4b7e-4a52-b71c-69a8c98b3163", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a", Pod:"csi-node-driver-k2xgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb24dd7c77e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.911 [INFO][5586] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.911 [INFO][5586] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" iface="eth0" netns="" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.911 [INFO][5586] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.911 [INFO][5586] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.950 [INFO][5594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.951 [INFO][5594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.951 [INFO][5594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.964 [WARNING][5594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.964 [INFO][5594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.966 [INFO][5594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:38.971280 containerd[1629]: 2025-09-12 19:46:38.968 [INFO][5586] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:38.972269 containerd[1629]: time="2025-09-12T19:46:38.972005047Z" level=info msg="TearDown network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" successfully" Sep 12 19:46:38.972269 containerd[1629]: time="2025-09-12T19:46:38.972042449Z" level=info msg="StopPodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" returns successfully" Sep 12 19:46:38.973295 containerd[1629]: time="2025-09-12T19:46:38.972732568Z" level=info msg="RemovePodSandbox for \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\"" Sep 12 19:46:38.973295 containerd[1629]: time="2025-09-12T19:46:38.972810831Z" level=info msg="Forcibly stopping sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\"" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.072 [WARNING][5608] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c267a380-4b7e-4a52-b71c-69a8c98b3163", ResourceVersion:"960", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 45, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"856c6b598f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-l18mb.gb1.brightbox.com", ContainerID:"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a", Pod:"csi-node-driver-k2xgk", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.6.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieb24dd7c77e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.073 [INFO][5608] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.073 [INFO][5608] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" iface="eth0" netns="" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.073 [INFO][5608] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.073 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.140 [INFO][5615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.140 [INFO][5615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.140 [INFO][5615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.155 [WARNING][5615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.155 [INFO][5615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" HandleID="k8s-pod-network.ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Workload="srv--l18mb.gb1.brightbox.com-k8s-csi--node--driver--k2xgk-eth0" Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.158 [INFO][5615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:46:39.166335 containerd[1629]: 2025-09-12 19:46:39.160 [INFO][5608] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919" Sep 12 19:46:39.179222 containerd[1629]: time="2025-09-12T19:46:39.166426715Z" level=info msg="TearDown network for sandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" successfully" Sep 12 19:46:39.187833 containerd[1629]: time="2025-09-12T19:46:39.187789423Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:46:39.188380 containerd[1629]: time="2025-09-12T19:46:39.188345734Z" level=info msg="RemovePodSandbox \"ba7ff96e7283136773fe9e9af5912c56100fc3aa0178d57555b5a8856cc56919\" returns successfully" Sep 12 19:46:40.168809 containerd[1629]: time="2025-09-12T19:46:40.168680197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:40.171241 containerd[1629]: time="2025-09-12T19:46:40.170531397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746" Sep 12 19:46:40.171241 containerd[1629]: time="2025-09-12T19:46:40.171169342Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:40.178616 containerd[1629]: time="2025-09-12T19:46:40.177697339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:40.181983 containerd[1629]: time="2025-09-12T19:46:40.181945085Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 5.121558652s" Sep 12 19:46:40.182458 containerd[1629]: time="2025-09-12T19:46:40.182406235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\"" Sep 12 19:46:40.185179 containerd[1629]: time="2025-09-12T19:46:40.184854307Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 19:46:40.272238 containerd[1629]: time="2025-09-12T19:46:40.272187795Z" level=info msg="CreateContainer within sandbox \"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Sep 12 19:46:40.352202 containerd[1629]: time="2025-09-12T19:46:40.352149068Z" level=info msg="CreateContainer within sandbox \"5dde28013305e4f73552457634317ef7d4cf9adea1091725348e8eddbc293230\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"13e7affe3fa38dc110a454636e4568baacb15bb2bad347edcb4742fedc301b76\"" Sep 12 19:46:40.353742 containerd[1629]: time="2025-09-12T19:46:40.353705186Z" level=info msg="StartContainer for \"13e7affe3fa38dc110a454636e4568baacb15bb2bad347edcb4742fedc301b76\"" Sep 12 19:46:40.742371 containerd[1629]: time="2025-09-12T19:46:40.742199335Z" level=info msg="StartContainer for \"13e7affe3fa38dc110a454636e4568baacb15bb2bad347edcb4742fedc301b76\" returns successfully" Sep 12 19:46:41.373314 kubelet[2856]: I0912 19:46:41.373114 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-59796884b-cxjns" podStartSLOduration=32.856812322 podStartE2EDuration="43.355451086s" podCreationTimestamp="2025-09-12 19:45:58 +0000 UTC" firstStartedPulling="2025-09-12 19:46:29.685704048 +0000 UTC m=+54.437916049" lastFinishedPulling="2025-09-12 19:46:40.184342798 +0000 UTC m=+64.936554813" observedRunningTime="2025-09-12 19:46:41.355323885 +0000 UTC m=+66.107535903" watchObservedRunningTime="2025-09-12 19:46:41.355451086 +0000 UTC m=+66.107663095" Sep 12 19:46:42.140491 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:42.137289 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:42.137347 systemd-resolved[1511]: Flushed all caches. 
Sep 12 19:46:42.255350 containerd[1629]: time="2025-09-12T19:46:42.255287890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:42.256681 containerd[1629]: time="2025-09-12T19:46:42.256496655Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 19:46:42.257961 containerd[1629]: time="2025-09-12T19:46:42.257553249Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:42.266782 containerd[1629]: time="2025-09-12T19:46:42.266722429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:42.268179 containerd[1629]: time="2025-09-12T19:46:42.268141542Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.083208644s" Sep 12 19:46:42.268328 containerd[1629]: time="2025-09-12T19:46:42.268299480Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 19:46:42.271193 containerd[1629]: time="2025-09-12T19:46:42.271154166Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 19:46:42.274961 containerd[1629]: time="2025-09-12T19:46:42.274885969Z" level=info msg="CreateContainer within sandbox \"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 19:46:42.328346 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3627143316.mount: Deactivated successfully. Sep 12 19:46:42.339233 containerd[1629]: time="2025-09-12T19:46:42.329680509Z" level=info msg="CreateContainer within sandbox \"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a784037f8a9443e675e1753e8ba943a33fd47e58491a75163c73d4ce2e71c58\"" Sep 12 19:46:42.339233 containerd[1629]: time="2025-09-12T19:46:42.333144240Z" level=info msg="StartContainer for \"0a784037f8a9443e675e1753e8ba943a33fd47e58491a75163c73d4ce2e71c58\"" Sep 12 19:46:42.452988 containerd[1629]: time="2025-09-12T19:46:42.452506328Z" level=info msg="StartContainer for \"0a784037f8a9443e675e1753e8ba943a33fd47e58491a75163c73d4ce2e71c58\" returns successfully" Sep 12 19:46:44.189016 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:44.187667 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:44.187717 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:45.966500 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3328686824.mount: Deactivated successfully. 
Sep 12 19:46:47.228922 containerd[1629]: time="2025-09-12T19:46:47.228780730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:47.231930 containerd[1629]: time="2025-09-12T19:46:47.231786647Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 19:46:47.233430 containerd[1629]: time="2025-09-12T19:46:47.233119765Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:47.238179 containerd[1629]: time="2025-09-12T19:46:47.238112643Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:47.240686 containerd[1629]: time="2025-09-12T19:46:47.239913149Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 4.968705368s" Sep 12 19:46:47.240686 containerd[1629]: time="2025-09-12T19:46:47.239976069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 19:46:47.282194 containerd[1629]: time="2025-09-12T19:46:47.281650187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 19:46:47.297560 containerd[1629]: time="2025-09-12T19:46:47.297354257Z" level=info msg="CreateContainer within sandbox 
\"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 19:46:47.364818 containerd[1629]: time="2025-09-12T19:46:47.364677225Z" level=info msg="CreateContainer within sandbox \"cd9fb7456e92968105273e19e4e01b3d0c1486278d940a2f131eba069ac0deaa\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"bed6d19ade1b7855a0e1653815a151432436b8043e5fa3bbd0d4e88b3be55d18\"" Sep 12 19:46:47.375919 containerd[1629]: time="2025-09-12T19:46:47.375746742Z" level=info msg="StartContainer for \"bed6d19ade1b7855a0e1653815a151432436b8043e5fa3bbd0d4e88b3be55d18\"" Sep 12 19:46:47.752325 containerd[1629]: time="2025-09-12T19:46:47.752020216Z" level=info msg="StartContainer for \"bed6d19ade1b7855a0e1653815a151432436b8043e5fa3bbd0d4e88b3be55d18\" returns successfully" Sep 12 19:46:47.768904 containerd[1629]: time="2025-09-12T19:46:47.768431903Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:47.775006 containerd[1629]: time="2025-09-12T19:46:47.773524480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77" Sep 12 19:46:47.780684 containerd[1629]: time="2025-09-12T19:46:47.780605853Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 498.875318ms" Sep 12 19:46:47.785496 containerd[1629]: time="2025-09-12T19:46:47.785338798Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 19:46:47.796088 
containerd[1629]: time="2025-09-12T19:46:47.795433242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\"" Sep 12 19:46:47.805691 containerd[1629]: time="2025-09-12T19:46:47.805580920Z" level=info msg="CreateContainer within sandbox \"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 19:46:47.848377 containerd[1629]: time="2025-09-12T19:46:47.848308455Z" level=info msg="CreateContainer within sandbox \"7c0f3068b6db45bcd7a9f61a99c221e5372bb2613564eabaa35aaba3c4b08a6f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e72bde246adaeb15e537ae97512b9f410a99fdf66ea0267cdb0d2907105ac0d1\"" Sep 12 19:46:47.853213 containerd[1629]: time="2025-09-12T19:46:47.853178536Z" level=info msg="StartContainer for \"e72bde246adaeb15e537ae97512b9f410a99fdf66ea0267cdb0d2907105ac0d1\"" Sep 12 19:46:48.025542 containerd[1629]: time="2025-09-12T19:46:48.023033880Z" level=info msg="StartContainer for \"e72bde246adaeb15e537ae97512b9f410a99fdf66ea0267cdb0d2907105ac0d1\" returns successfully" Sep 12 19:46:48.175566 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:48.174379 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:48.174415 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:48.712129 systemd[1]: run-containerd-runc-k8s.io-bed6d19ade1b7855a0e1653815a151432436b8043e5fa3bbd0d4e88b3be55d18-runc.JbTkXN.mount: Deactivated successfully. 
Sep 12 19:46:49.028545 kubelet[2856]: I0912 19:46:49.006596 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-7988f88666-glqx6" podStartSLOduration=35.201737579 podStartE2EDuration="51.979321998s" podCreationTimestamp="2025-09-12 19:45:57 +0000 UTC" firstStartedPulling="2025-09-12 19:46:30.501689942 +0000 UTC m=+55.253901940" lastFinishedPulling="2025-09-12 19:46:47.279274348 +0000 UTC m=+72.031486359" observedRunningTime="2025-09-12 19:46:48.784551571 +0000 UTC m=+73.536763597" watchObservedRunningTime="2025-09-12 19:46:48.979321998 +0000 UTC m=+73.731534007" Sep 12 19:46:49.033714 kubelet[2856]: I0912 19:46:49.029251 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-758b4974c-zpjm5" podStartSLOduration=37.805770867 podStartE2EDuration="55.029228552s" podCreationTimestamp="2025-09-12 19:45:54 +0000 UTC" firstStartedPulling="2025-09-12 19:46:30.567829795 +0000 UTC m=+55.320041799" lastFinishedPulling="2025-09-12 19:46:47.791287485 +0000 UTC m=+72.543499484" observedRunningTime="2025-09-12 19:46:48.979151894 +0000 UTC m=+73.731363915" watchObservedRunningTime="2025-09-12 19:46:49.029228552 +0000 UTC m=+73.781440561" Sep 12 19:46:49.578092 kubelet[2856]: I0912 19:46:49.576844 2856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 19:46:50.224377 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:50.214197 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:50.214246 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:51.901808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713951136.mount: Deactivated successfully. 
Sep 12 19:46:51.922817 containerd[1629]: time="2025-09-12T19:46:51.922717275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:51.924702 containerd[1629]: time="2025-09-12T19:46:51.924613302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545" Sep 12 19:46:51.926927 containerd[1629]: time="2025-09-12T19:46:51.925194126Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:51.930006 containerd[1629]: time="2025-09-12T19:46:51.929955830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:51.932542 containerd[1629]: time="2025-09-12T19:46:51.932412006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 4.136913611s" Sep 12 19:46:51.932542 containerd[1629]: time="2025-09-12T19:46:51.932498663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\"" Sep 12 19:46:51.993770 containerd[1629]: time="2025-09-12T19:46:51.993692552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\"" Sep 12 19:46:52.063893 containerd[1629]: time="2025-09-12T19:46:52.063803405Z" level=info msg="CreateContainer within sandbox 
\"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Sep 12 19:46:52.083182 containerd[1629]: time="2025-09-12T19:46:52.083128062Z" level=info msg="CreateContainer within sandbox \"92bb3c86b583e12d961ccbba0ae20cdf0b50e7ba2d961215d24e4b7369154bb2\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"85a9448a28677eb504d6efc764b03652f522c14608de0dcba3b01b511837c3c7\"" Sep 12 19:46:52.085799 containerd[1629]: time="2025-09-12T19:46:52.085757242Z" level=info msg="StartContainer for \"85a9448a28677eb504d6efc764b03652f522c14608de0dcba3b01b511837c3c7\"" Sep 12 19:46:52.240924 containerd[1629]: time="2025-09-12T19:46:52.240587654Z" level=info msg="StartContainer for \"85a9448a28677eb504d6efc764b03652f522c14608de0dcba3b01b511837c3c7\" returns successfully" Sep 12 19:46:52.252905 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:52.252953 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:52.252981 systemd-resolved[1511]: Flushed all caches. Sep 12 19:46:52.645020 kubelet[2856]: I0912 19:46:52.639355 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-759bcb7f56-rw7jz" podStartSLOduration=2.914127192 podStartE2EDuration="27.639302199s" podCreationTimestamp="2025-09-12 19:46:25 +0000 UTC" firstStartedPulling="2025-09-12 19:46:27.277948445 +0000 UTC m=+52.030160445" lastFinishedPulling="2025-09-12 19:46:52.003123435 +0000 UTC m=+76.755335452" observedRunningTime="2025-09-12 19:46:52.628039559 +0000 UTC m=+77.380251578" watchObservedRunningTime="2025-09-12 19:46:52.639302199 +0000 UTC m=+77.391514209" Sep 12 19:46:54.304698 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:54.300312 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:54.300415 systemd-resolved[1511]: Flushed all caches. 
Sep 12 19:46:54.456397 containerd[1629]: time="2025-09-12T19:46:54.452804471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:54.456397 containerd[1629]: time="2025-09-12T19:46:54.455003726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542" Sep 12 19:46:54.461524 containerd[1629]: time="2025-09-12T19:46:54.461481534Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:54.493880 containerd[1629]: time="2025-09-12T19:46:54.492607330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:46:54.512895 containerd[1629]: time="2025-09-12T19:46:54.512436471Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.518647196s" Sep 12 19:46:54.512895 containerd[1629]: time="2025-09-12T19:46:54.512499130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\"" Sep 12 19:46:54.535595 containerd[1629]: time="2025-09-12T19:46:54.535521274Z" level=info msg="CreateContainer within sandbox \"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Sep 12 19:46:54.573169 containerd[1629]: time="2025-09-12T19:46:54.573006218Z" level=info msg="CreateContainer within sandbox \"505c9eff1bcbb0b4aee376b09c336f89a99782aebecf7af65b98ceb36ff7e24a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"db3c0fcaafe6e7d49f0bc1f04c436165bcde4019cd177ba0cf9b7f9f3dbeb2f8\"" Sep 12 19:46:54.577149 containerd[1629]: time="2025-09-12T19:46:54.574944603Z" level=info msg="StartContainer for \"db3c0fcaafe6e7d49f0bc1f04c436165bcde4019cd177ba0cf9b7f9f3dbeb2f8\"" Sep 12 19:46:54.668163 systemd[1]: run-containerd-runc-k8s.io-db3c0fcaafe6e7d49f0bc1f04c436165bcde4019cd177ba0cf9b7f9f3dbeb2f8-runc.jz7ZWX.mount: Deactivated successfully. Sep 12 19:46:54.738970 containerd[1629]: time="2025-09-12T19:46:54.738915234Z" level=info msg="StartContainer for \"db3c0fcaafe6e7d49f0bc1f04c436165bcde4019cd177ba0cf9b7f9f3dbeb2f8\" returns successfully" Sep 12 19:46:55.933058 kubelet[2856]: I0912 19:46:55.930656 2856 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Sep 12 19:46:55.936705 kubelet[2856]: I0912 19:46:55.936528 2856 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Sep 12 19:46:56.346695 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:46:56.349792 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:46:56.346711 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:08.001842 systemd[1]: Started sshd@9-10.230.9.238:22-139.178.68.195:47812.service - OpenSSH per-connection server daemon (139.178.68.195:47812). Sep 12 19:47:08.121018 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:08.126817 systemd-journald[1180]: Under memory pressure, flushing caches. 
Sep 12 19:47:08.121029 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:08.160652 kubelet[2856]: I0912 19:47:08.159885 2856 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 19:47:08.295011 kubelet[2856]: I0912 19:47:08.292906 2856 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k2xgk" podStartSLOduration=46.234229366 podStartE2EDuration="1m10.26036703s" podCreationTimestamp="2025-09-12 19:45:58 +0000 UTC" firstStartedPulling="2025-09-12 19:46:30.489705975 +0000 UTC m=+55.241917976" lastFinishedPulling="2025-09-12 19:46:54.515843637 +0000 UTC m=+79.268055640" observedRunningTime="2025-09-12 19:46:55.724068497 +0000 UTC m=+80.476280557" watchObservedRunningTime="2025-09-12 19:47:08.26036703 +0000 UTC m=+93.012579078" Sep 12 19:47:09.168007 sshd[6000]: Accepted publickey for core from 139.178.68.195 port 47812 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:09.173011 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:09.227447 systemd-logind[1607]: New session 12 of user core. Sep 12 19:47:09.231312 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 19:47:10.173628 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:10.171995 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:10.172027 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:10.755688 sshd[6000]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:10.765582 systemd[1]: run-containerd-runc-k8s.io-13e7affe3fa38dc110a454636e4568baacb15bb2bad347edcb4742fedc301b76-runc.txXlVX.mount: Deactivated successfully. Sep 12 19:47:10.792073 systemd[1]: sshd@9-10.230.9.238:22-139.178.68.195:47812.service: Deactivated successfully. Sep 12 19:47:10.795404 systemd-logind[1607]: Session 12 logged out. Waiting for processes to exit. 
Sep 12 19:47:10.811136 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 19:47:10.827421 systemd-logind[1607]: Removed session 12. Sep 12 19:47:12.221615 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:12.218331 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:12.218356 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:15.912405 systemd[1]: Started sshd@10-10.230.9.238:22-139.178.68.195:34406.service - OpenSSH per-connection server daemon (139.178.68.195:34406). Sep 12 19:47:16.898072 sshd[6091]: Accepted publickey for core from 139.178.68.195 port 34406 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:16.901532 sshd[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:16.923683 systemd-logind[1607]: New session 13 of user core. Sep 12 19:47:16.930430 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 19:47:18.182174 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:18.170884 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:18.170946 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:18.224782 sshd[6091]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:18.242723 systemd[1]: sshd@10-10.230.9.238:22-139.178.68.195:34406.service: Deactivated successfully. Sep 12 19:47:18.251403 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 19:47:18.253294 systemd-logind[1607]: Session 13 logged out. Waiting for processes to exit. Sep 12 19:47:18.257504 systemd-logind[1607]: Removed session 13. Sep 12 19:47:20.220040 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:20.219023 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:20.219034 systemd-resolved[1511]: Flushed all caches. 
Sep 12 19:47:23.390263 systemd[1]: Started sshd@11-10.230.9.238:22-139.178.68.195:38220.service - OpenSSH per-connection server daemon (139.178.68.195:38220). Sep 12 19:47:24.362454 sshd[6116]: Accepted publickey for core from 139.178.68.195 port 38220 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:24.367101 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:24.378894 systemd-logind[1607]: New session 14 of user core. Sep 12 19:47:24.387929 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 19:47:25.362285 sshd[6116]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:25.372964 systemd[1]: sshd@11-10.230.9.238:22-139.178.68.195:38220.service: Deactivated successfully. Sep 12 19:47:25.381696 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 19:47:25.385804 systemd-logind[1607]: Session 14 logged out. Waiting for processes to exit. Sep 12 19:47:25.388569 systemd-logind[1607]: Removed session 14. Sep 12 19:47:25.530237 systemd[1]: Started sshd@12-10.230.9.238:22-139.178.68.195:38236.service - OpenSSH per-connection server daemon (139.178.68.195:38236). Sep 12 19:47:26.509910 sshd[6142]: Accepted publickey for core from 139.178.68.195 port 38236 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:26.515513 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:26.536496 systemd-logind[1607]: New session 15 of user core. Sep 12 19:47:26.543329 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 19:47:27.557166 sshd[6142]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:27.567514 systemd[1]: sshd@12-10.230.9.238:22-139.178.68.195:38236.service: Deactivated successfully. Sep 12 19:47:27.581271 systemd-logind[1607]: Session 15 logged out. Waiting for processes to exit. 
Sep 12 19:47:27.581420 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 19:47:27.590528 systemd-logind[1607]: Removed session 15. Sep 12 19:47:27.710540 systemd[1]: Started sshd@13-10.230.9.238:22-139.178.68.195:38244.service - OpenSSH per-connection server daemon (139.178.68.195:38244). Sep 12 19:47:28.637656 sshd[6173]: Accepted publickey for core from 139.178.68.195 port 38244 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:28.641694 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:28.663230 systemd-logind[1607]: New session 16 of user core. Sep 12 19:47:28.668335 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 19:47:29.421248 sshd[6173]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:29.429231 systemd-logind[1607]: Session 16 logged out. Waiting for processes to exit. Sep 12 19:47:29.430975 systemd[1]: sshd@13-10.230.9.238:22-139.178.68.195:38244.service: Deactivated successfully. Sep 12 19:47:29.445284 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 19:47:29.446838 systemd-logind[1607]: Removed session 16. Sep 12 19:47:34.577088 systemd[1]: Started sshd@14-10.230.9.238:22-139.178.68.195:43440.service - OpenSSH per-connection server daemon (139.178.68.195:43440). Sep 12 19:47:35.540557 sshd[6189]: Accepted publickey for core from 139.178.68.195 port 43440 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:35.544592 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:35.566262 systemd-logind[1607]: New session 17 of user core. Sep 12 19:47:35.571157 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 19:47:36.993303 sshd[6189]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:37.011233 systemd-logind[1607]: Session 17 logged out. Waiting for processes to exit. 
Sep 12 19:47:37.014797 systemd[1]: sshd@14-10.230.9.238:22-139.178.68.195:43440.service: Deactivated successfully. Sep 12 19:47:37.023084 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 19:47:37.026151 systemd-logind[1607]: Removed session 17. Sep 12 19:47:38.208983 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:38.208251 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:38.208281 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:42.169248 systemd[1]: Started sshd@15-10.230.9.238:22-139.178.68.195:58704.service - OpenSSH per-connection server daemon (139.178.68.195:58704). Sep 12 19:47:43.137946 sshd[6243]: Accepted publickey for core from 139.178.68.195 port 58704 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:43.140729 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:43.161110 systemd-logind[1607]: New session 18 of user core. Sep 12 19:47:43.169339 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 19:47:44.268492 sshd[6243]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:44.281248 systemd[1]: sshd@15-10.230.9.238:22-139.178.68.195:58704.service: Deactivated successfully. Sep 12 19:47:44.294352 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 19:47:44.304446 systemd-logind[1607]: Session 18 logged out. Waiting for processes to exit. Sep 12 19:47:44.317108 systemd-logind[1607]: Removed session 18. Sep 12 19:47:44.355621 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:44.347448 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:44.347473 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:49.425036 systemd[1]: Started sshd@16-10.230.9.238:22-139.178.68.195:58706.service - OpenSSH per-connection server daemon (139.178.68.195:58706). 
Sep 12 19:47:50.382955 sshd[6265]: Accepted publickey for core from 139.178.68.195 port 58706 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:50.388822 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:50.407956 systemd-logind[1607]: New session 19 of user core. Sep 12 19:47:50.415406 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 19:47:51.269934 sshd[6265]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:51.278649 systemd[1]: sshd@16-10.230.9.238:22-139.178.68.195:58706.service: Deactivated successfully. Sep 12 19:47:51.290510 systemd-logind[1607]: Session 19 logged out. Waiting for processes to exit. Sep 12 19:47:51.293481 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 19:47:51.295748 systemd-logind[1607]: Removed session 19. Sep 12 19:47:51.424576 systemd[1]: Started sshd@17-10.230.9.238:22-139.178.68.195:56658.service - OpenSSH per-connection server daemon (139.178.68.195:56658). Sep 12 19:47:52.344167 sshd[6287]: Accepted publickey for core from 139.178.68.195 port 56658 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:52.346289 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:52.357113 systemd-logind[1607]: New session 20 of user core. Sep 12 19:47:52.368562 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 19:47:53.591473 sshd[6287]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:53.604898 systemd[1]: sshd@17-10.230.9.238:22-139.178.68.195:56658.service: Deactivated successfully. Sep 12 19:47:53.611458 systemd-logind[1607]: Session 20 logged out. Waiting for processes to exit. Sep 12 19:47:53.613406 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 19:47:53.621038 systemd-logind[1607]: Removed session 20. 
Sep 12 19:47:53.737208 systemd[1]: Started sshd@18-10.230.9.238:22-139.178.68.195:56674.service - OpenSSH per-connection server daemon (139.178.68.195:56674). Sep 12 19:47:54.142625 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:54.137855 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:54.137949 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:54.708388 sshd[6320]: Accepted publickey for core from 139.178.68.195 port 56674 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:47:54.714135 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:47:54.738009 systemd-logind[1607]: New session 21 of user core. Sep 12 19:47:54.744277 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 19:47:58.179017 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:47:58.173164 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:47:58.173205 systemd-resolved[1511]: Flushed all caches. Sep 12 19:47:59.458960 sshd[6320]: pam_unix(sshd:session): session closed for user core Sep 12 19:47:59.591538 systemd[1]: sshd@18-10.230.9.238:22-139.178.68.195:56674.service: Deactivated successfully. Sep 12 19:47:59.648849 systemd-logind[1607]: Session 21 logged out. Waiting for processes to exit. Sep 12 19:47:59.652466 systemd[1]: Started sshd@19-10.230.9.238:22-139.178.68.195:56678.service - OpenSSH per-connection server daemon (139.178.68.195:56678). Sep 12 19:47:59.661739 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 19:47:59.683310 systemd-logind[1607]: Removed session 21. Sep 12 19:48:00.228338 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:00.218100 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:00.218113 systemd-resolved[1511]: Flushed all caches. 
Sep 12 19:48:00.775549 sshd[6339]: Accepted publickey for core from 139.178.68.195 port 56678 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:48:00.787415 sshd[6339]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:48:01.008350 systemd-logind[1607]: New session 22 of user core. Sep 12 19:48:01.024437 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 19:48:01.626912 kubelet[2856]: E0912 19:48:01.560250 2856 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.962s" Sep 12 19:48:02.317613 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:02.298023 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:02.298052 systemd-resolved[1511]: Flushed all caches. Sep 12 19:48:04.332798 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:04.329249 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:04.329284 systemd-resolved[1511]: Flushed all caches. Sep 12 19:48:04.798929 kubelet[2856]: E0912 19:48:04.712765 2856 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.587s" Sep 12 19:48:06.368023 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:06.362253 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:06.362265 systemd-resolved[1511]: Flushed all caches. Sep 12 19:48:06.446818 sshd[6339]: pam_unix(sshd:session): session closed for user core Sep 12 19:48:06.520657 systemd[1]: sshd@19-10.230.9.238:22-139.178.68.195:56678.service: Deactivated successfully. Sep 12 19:48:06.529356 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 19:48:06.533689 systemd-logind[1607]: Session 22 logged out. Waiting for processes to exit. Sep 12 19:48:06.580714 systemd-logind[1607]: Removed session 22. 
Sep 12 19:48:06.618364 systemd[1]: Started sshd@20-10.230.9.238:22-139.178.68.195:42330.service - OpenSSH per-connection server daemon (139.178.68.195:42330). Sep 12 19:48:07.565942 sshd[6377]: Accepted publickey for core from 139.178.68.195 port 42330 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:48:07.569169 sshd[6377]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:48:07.579088 systemd-logind[1607]: New session 23 of user core. Sep 12 19:48:07.586296 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 19:48:08.901203 sshd[6377]: pam_unix(sshd:session): session closed for user core Sep 12 19:48:08.908207 systemd[1]: sshd@20-10.230.9.238:22-139.178.68.195:42330.service: Deactivated successfully. Sep 12 19:48:08.918762 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 19:48:08.921793 systemd-logind[1607]: Session 23 logged out. Waiting for processes to exit. Sep 12 19:48:08.925408 systemd-logind[1607]: Removed session 23. Sep 12 19:48:12.167283 systemd[1]: run-containerd-runc-k8s.io-bed6d19ade1b7855a0e1653815a151432436b8043e5fa3bbd0d4e88b3be55d18-runc.WSiLwe.mount: Deactivated successfully. Sep 12 19:48:12.200777 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:12.196763 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:12.196774 systemd-resolved[1511]: Flushed all caches. Sep 12 19:48:14.066238 systemd[1]: Started sshd@21-10.230.9.238:22-139.178.68.195:53950.service - OpenSSH per-connection server daemon (139.178.68.195:53950). Sep 12 19:48:14.237015 systemd-journald[1180]: Under memory pressure, flushing caches. Sep 12 19:48:14.232980 systemd-resolved[1511]: Under memory pressure, flushing caches. Sep 12 19:48:14.232991 systemd-resolved[1511]: Flushed all caches. 
Sep 12 19:48:15.253030 sshd[6462]: Accepted publickey for core from 139.178.68.195 port 53950 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:48:15.257647 sshd[6462]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:48:15.301941 systemd-logind[1607]: New session 24 of user core.
Sep 12 19:48:15.306275 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 19:48:16.288029 systemd-journald[1180]: Under memory pressure, flushing caches.
Sep 12 19:48:16.282384 systemd-resolved[1511]: Under memory pressure, flushing caches.
Sep 12 19:48:16.282404 systemd-resolved[1511]: Flushed all caches.
Sep 12 19:48:16.762182 sshd[6462]: pam_unix(sshd:session): session closed for user core
Sep 12 19:48:16.774989 systemd-logind[1607]: Session 24 logged out. Waiting for processes to exit.
Sep 12 19:48:16.775642 systemd[1]: sshd@21-10.230.9.238:22-139.178.68.195:53950.service: Deactivated successfully.
Sep 12 19:48:16.782762 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 19:48:16.785715 systemd-logind[1607]: Removed session 24.
Sep 12 19:48:21.932379 systemd[1]: Started sshd@22-10.230.9.238:22-139.178.68.195:46208.service - OpenSSH per-connection server daemon (139.178.68.195:46208).
Sep 12 19:48:22.985479 sshd[6477]: Accepted publickey for core from 139.178.68.195 port 46208 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:48:22.994136 sshd[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:48:23.014139 systemd-logind[1607]: New session 25 of user core.
Sep 12 19:48:23.021700 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 19:48:24.209396 sshd[6477]: pam_unix(sshd:session): session closed for user core
Sep 12 19:48:24.224343 systemd[1]: sshd@22-10.230.9.238:22-139.178.68.195:46208.service: Deactivated successfully.
Sep 12 19:48:24.232246 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 19:48:24.234413 systemd-logind[1607]: Session 25 logged out. Waiting for processes to exit.
Sep 12 19:48:24.241936 systemd-logind[1607]: Removed session 25.
Sep 12 19:48:25.443039 update_engine[1611]: I20250912 19:48:25.441689 1611 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Sep 12 19:48:25.443039 update_engine[1611]: I20250912 19:48:25.441879 1611 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Sep 12 19:48:25.464566 update_engine[1611]: I20250912 19:48:25.455418 1611 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Sep 12 19:48:25.464566 update_engine[1611]: I20250912 19:48:25.463842 1611 omaha_request_params.cc:62] Current group set to lts
Sep 12 19:48:25.468071 update_engine[1611]: I20250912 19:48:25.466928 1611 update_attempter.cc:499] Already updated boot flags. Skipping.
Sep 12 19:48:25.468071 update_engine[1611]: I20250912 19:48:25.466962 1611 update_attempter.cc:643] Scheduling an action processor start.
Sep 12 19:48:25.468071 update_engine[1611]: I20250912 19:48:25.466997 1611 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Sep 12 19:48:25.468071 update_engine[1611]: I20250912 19:48:25.467075 1611 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Sep 12 19:48:25.472707 update_engine[1611]: I20250912 19:48:25.472648 1611 omaha_request_action.cc:271] Posting an Omaha request to disabled
Sep 12 19:48:25.472707 update_engine[1611]: I20250912 19:48:25.472685 1611 omaha_request_action.cc:272] Request:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.472707 update_engine[1611]:
Sep 12 19:48:25.476816 update_engine[1611]: I20250912 19:48:25.472710 1611 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Sep 12 19:48:25.492222 update_engine[1611]: I20250912 19:48:25.492177 1611 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Sep 12 19:48:25.492997 update_engine[1611]: I20250912 19:48:25.492930 1611 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Sep 12 19:48:25.502702 update_engine[1611]: E20250912 19:48:25.502661 1611 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Sep 12 19:48:25.503069 update_engine[1611]: I20250912 19:48:25.502980 1611 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Sep 12 19:48:25.525899 locksmithd[1644]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Sep 12 19:48:26.274409 systemd-journald[1180]: Under memory pressure, flushing caches.
Sep 12 19:48:26.276713 systemd-resolved[1511]: Under memory pressure, flushing caches.
Sep 12 19:48:26.276750 systemd-resolved[1511]: Flushed all caches.