Sep 12 19:23:59.042400 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Sep 12 16:05:08 -00 2025
Sep 12 19:23:59.042454 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:23:59.042469 kernel: BIOS-provided physical RAM map:
Sep 12 19:23:59.042485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Sep 12 19:23:59.042495 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Sep 12 19:23:59.042505 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Sep 12 19:23:59.042516 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Sep 12 19:23:59.042526 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Sep 12 19:23:59.042537 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Sep 12 19:23:59.042547 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Sep 12 19:23:59.042557 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Sep 12 19:23:59.042567 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Sep 12 19:23:59.042589 kernel: NX (Execute Disable) protection: active
Sep 12 19:23:59.042600 kernel: APIC: Static calls initialized
Sep 12 19:23:59.042613 kernel: SMBIOS 2.8 present.
Sep 12 19:23:59.042629 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Sep 12 19:23:59.042641 kernel: Hypervisor detected: KVM
Sep 12 19:23:59.042657 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Sep 12 19:23:59.042669 kernel: kvm-clock: using sched offset of 5125485223 cycles
Sep 12 19:23:59.042681 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Sep 12 19:23:59.042692 kernel: tsc: Detected 2799.998 MHz processor
Sep 12 19:23:59.042704 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Sep 12 19:23:59.042717 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Sep 12 19:23:59.042728 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Sep 12 19:23:59.042739 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Sep 12 19:23:59.042751 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Sep 12 19:23:59.042767 kernel: Using GB pages for direct mapping
Sep 12 19:23:59.042778 kernel: ACPI: Early table checksum verification disabled
Sep 12 19:23:59.042789 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Sep 12 19:23:59.042801 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042812 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042823 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042835 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Sep 12 19:23:59.042846 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042857 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042873 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042885 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 19:23:59.042896 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Sep 12 19:23:59.042910 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Sep 12 19:23:59.042922 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Sep 12 19:23:59.042939 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Sep 12 19:23:59.042951 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Sep 12 19:23:59.042973 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Sep 12 19:23:59.042985 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Sep 12 19:23:59.042996 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Sep 12 19:23:59.043013 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Sep 12 19:23:59.043026 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Sep 12 19:23:59.043038 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Sep 12 19:23:59.043050 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Sep 12 19:23:59.043062 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Sep 12 19:23:59.043079 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Sep 12 19:23:59.043091 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Sep 12 19:23:59.043102 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Sep 12 19:23:59.043114 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Sep 12 19:23:59.043126 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Sep 12 19:23:59.043137 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Sep 12 19:23:59.043149 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Sep 12 19:23:59.043160 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Sep 12 19:23:59.043177 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Sep 12 19:23:59.043219 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Sep 12 19:23:59.043233 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Sep 12 19:23:59.043245 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Sep 12 19:23:59.043256 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Sep 12 19:23:59.043269 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Sep 12 19:23:59.043281 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Sep 12 19:23:59.043293 kernel: Zone ranges:
Sep 12 19:23:59.043305 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Sep 12 19:23:59.043316 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Sep 12 19:23:59.043334 kernel: Normal empty
Sep 12 19:23:59.043346 kernel: Movable zone start for each node
Sep 12 19:23:59.043358 kernel: Early memory node ranges
Sep 12 19:23:59.043369 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Sep 12 19:23:59.043381 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Sep 12 19:23:59.043393 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Sep 12 19:23:59.043405 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Sep 12 19:23:59.043416 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Sep 12 19:23:59.043445 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Sep 12 19:23:59.043459 kernel: ACPI: PM-Timer IO Port: 0x608
Sep 12 19:23:59.043477 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Sep 12 19:23:59.043489 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Sep 12 19:23:59.043501 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Sep 12 19:23:59.043513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Sep 12 19:23:59.043525 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Sep 12 19:23:59.043537 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Sep 12 19:23:59.043548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Sep 12 19:23:59.043560 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Sep 12 19:23:59.043572 kernel: TSC deadline timer available
Sep 12 19:23:59.043589 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Sep 12 19:23:59.043601 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Sep 12 19:23:59.043612 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Sep 12 19:23:59.043624 kernel: Booting paravirtualized kernel on KVM
Sep 12 19:23:59.043636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Sep 12 19:23:59.043648 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Sep 12 19:23:59.043660 kernel: percpu: Embedded 58 pages/cpu s197160 r8192 d32216 u262144
Sep 12 19:23:59.043671 kernel: pcpu-alloc: s197160 r8192 d32216 u262144 alloc=1*2097152
Sep 12 19:23:59.043683 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Sep 12 19:23:59.043700 kernel: kvm-guest: PV spinlocks enabled
Sep 12 19:23:59.043712 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Sep 12 19:23:59.043725 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:23:59.043737 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 19:23:59.043749 kernel: random: crng init done
Sep 12 19:23:59.043761 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 19:23:59.043773 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Sep 12 19:23:59.045236 kernel: Fallback order for Node 0: 0
Sep 12 19:23:59.045264 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Sep 12 19:23:59.045292 kernel: Policy zone: DMA32
Sep 12 19:23:59.045305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 19:23:59.045318 kernel: software IO TLB: area num 16.
Sep 12 19:23:59.045330 kernel: Memory: 1901532K/2096616K available (12288K kernel code, 2293K rwdata, 22744K rodata, 42884K init, 2312K bss, 194824K reserved, 0K cma-reserved)
Sep 12 19:23:59.045342 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Sep 12 19:23:59.045354 kernel: Kernel/User page tables isolation: enabled
Sep 12 19:23:59.045366 kernel: ftrace: allocating 37974 entries in 149 pages
Sep 12 19:23:59.045378 kernel: ftrace: allocated 149 pages with 4 groups
Sep 12 19:23:59.045396 kernel: Dynamic Preempt: voluntary
Sep 12 19:23:59.045408 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 19:23:59.045421 kernel: rcu: RCU event tracing is enabled.
Sep 12 19:23:59.045433 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Sep 12 19:23:59.045457 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 19:23:59.045483 kernel: Rude variant of Tasks RCU enabled.
Sep 12 19:23:59.045501 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 19:23:59.045513 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 19:23:59.045526 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Sep 12 19:23:59.045538 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Sep 12 19:23:59.045551 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 19:23:59.045564 kernel: Console: colour VGA+ 80x25
Sep 12 19:23:59.045581 kernel: printk: console [tty0] enabled
Sep 12 19:23:59.045594 kernel: printk: console [ttyS0] enabled
Sep 12 19:23:59.045606 kernel: ACPI: Core revision 20230628
Sep 12 19:23:59.045619 kernel: APIC: Switch to symmetric I/O mode setup
Sep 12 19:23:59.045632 kernel: x2apic enabled
Sep 12 19:23:59.045649 kernel: APIC: Switched APIC routing to: physical x2apic
Sep 12 19:23:59.045668 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Sep 12 19:23:59.045681 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Sep 12 19:23:59.045694 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Sep 12 19:23:59.045706 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Sep 12 19:23:59.045719 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Sep 12 19:23:59.045732 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Sep 12 19:23:59.045745 kernel: Spectre V2 : Mitigation: Retpolines
Sep 12 19:23:59.045757 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Sep 12 19:23:59.045775 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Sep 12 19:23:59.045787 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Sep 12 19:23:59.045800 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Sep 12 19:23:59.045812 kernel: MDS: Mitigation: Clear CPU buffers
Sep 12 19:23:59.045824 kernel: MMIO Stale Data: Unknown: No mitigations
Sep 12 19:23:59.045837 kernel: SRBDS: Unknown: Dependent on hypervisor status
Sep 12 19:23:59.045857 kernel: active return thunk: its_return_thunk
Sep 12 19:23:59.045870 kernel: ITS: Mitigation: Aligned branch/return thunks
Sep 12 19:23:59.045882 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Sep 12 19:23:59.045895 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Sep 12 19:23:59.045907 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Sep 12 19:23:59.045930 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Sep 12 19:23:59.045942 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Sep 12 19:23:59.045960 kernel: Freeing SMP alternatives memory: 32K
Sep 12 19:23:59.045974 kernel: pid_max: default: 32768 minimum: 301
Sep 12 19:23:59.045986 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 12 19:23:59.045999 kernel: landlock: Up and running.
Sep 12 19:23:59.046011 kernel: SELinux: Initializing.
Sep 12 19:23:59.046024 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 19:23:59.046036 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Sep 12 19:23:59.046049 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Sep 12 19:23:59.046070 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:23:59.046088 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:23:59.046101 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Sep 12 19:23:59.046113 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Sep 12 19:23:59.046126 kernel: signal: max sigframe size: 1776
Sep 12 19:23:59.046138 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 19:23:59.046151 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 19:23:59.046164 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Sep 12 19:23:59.046176 kernel: smp: Bringing up secondary CPUs ...
Sep 12 19:23:59.046207 kernel: smpboot: x86: Booting SMP configuration:
Sep 12 19:23:59.046229 kernel: .... node #0, CPUs: #1
Sep 12 19:23:59.046242 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Sep 12 19:23:59.046265 kernel: smp: Brought up 1 node, 2 CPUs
Sep 12 19:23:59.046279 kernel: smpboot: Max logical packages: 16
Sep 12 19:23:59.046291 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Sep 12 19:23:59.046304 kernel: devtmpfs: initialized
Sep 12 19:23:59.046316 kernel: x86/mm: Memory block size: 128MB
Sep 12 19:23:59.046329 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 19:23:59.046342 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Sep 12 19:23:59.046361 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 19:23:59.046373 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 19:23:59.046386 kernel: audit: initializing netlink subsys (disabled)
Sep 12 19:23:59.046398 kernel: audit: type=2000 audit(1757705037.511:1): state=initialized audit_enabled=0 res=1
Sep 12 19:23:59.046411 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 19:23:59.046430 kernel: thermal_sys: Registered thermal governor 'user_space'
Sep 12 19:23:59.046452 kernel: cpuidle: using governor menu
Sep 12 19:23:59.046465 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 19:23:59.046477 kernel: dca service started, version 1.12.1
Sep 12 19:23:59.046496 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Sep 12 19:23:59.046509 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Sep 12 19:23:59.046522 kernel: PCI: Using configuration type 1 for base access
Sep 12 19:23:59.046534 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Sep 12 19:23:59.046547 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 19:23:59.046559 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 19:23:59.046572 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 19:23:59.046584 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 19:23:59.046597 kernel: ACPI: Added _OSI(Module Device)
Sep 12 19:23:59.046615 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 19:23:59.046627 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 19:23:59.046640 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 19:23:59.046652 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Sep 12 19:23:59.046665 kernel: ACPI: Interpreter enabled
Sep 12 19:23:59.046677 kernel: ACPI: PM: (supports S0 S5)
Sep 12 19:23:59.046690 kernel: ACPI: Using IOAPIC for interrupt routing
Sep 12 19:23:59.046702 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Sep 12 19:23:59.046715 kernel: PCI: Using E820 reservations for host bridge windows
Sep 12 19:23:59.046732 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Sep 12 19:23:59.046744 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 19:23:59.047041 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 19:23:59.049312 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 19:23:59.049520 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 19:23:59.049541 kernel: PCI host bridge to bus 0000:00
Sep 12 19:23:59.049729 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Sep 12 19:23:59.049901 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Sep 12 19:23:59.050059 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Sep 12 19:23:59.050237 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Sep 12 19:23:59.050477 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Sep 12 19:23:59.051368 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Sep 12 19:23:59.051592 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 19:23:59.051862 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Sep 12 19:23:59.052094 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Sep 12 19:23:59.052300 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Sep 12 19:23:59.052515 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Sep 12 19:23:59.052690 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Sep 12 19:23:59.052867 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Sep 12 19:23:59.053075 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.055335 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Sep 12 19:23:59.055551 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.055741 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Sep 12 19:23:59.055940 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.056128 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Sep 12 19:23:59.057502 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.057688 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Sep 12 19:23:59.057886 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.058077 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Sep 12 19:23:59.058343 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.058542 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Sep 12 19:23:59.058766 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.058953 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Sep 12 19:23:59.059152 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Sep 12 19:23:59.061400 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Sep 12 19:23:59.061632 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Sep 12 19:23:59.061814 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Sep 12 19:23:59.061991 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Sep 12 19:23:59.062165 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Sep 12 19:23:59.064402 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Sep 12 19:23:59.064622 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Sep 12 19:23:59.064805 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Sep 12 19:23:59.064980 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Sep 12 19:23:59.065159 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Sep 12 19:23:59.065422 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Sep 12 19:23:59.065613 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Sep 12 19:23:59.065813 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Sep 12 19:23:59.065985 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Sep 12 19:23:59.066156 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Sep 12 19:23:59.066362 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Sep 12 19:23:59.066553 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Sep 12 19:23:59.066753 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Sep 12 19:23:59.066942 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Sep 12 19:23:59.067119 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 12 19:23:59.068043 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 12 19:23:59.070354 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:23:59.070593 kernel: pci_bus 0000:02: extended config space not accessible
Sep 12 19:23:59.070804 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Sep 12 19:23:59.071008 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Sep 12 19:23:59.073222 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 12 19:23:59.073416 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 19:23:59.073634 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Sep 12 19:23:59.073827 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Sep 12 19:23:59.074016 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 12 19:23:59.074201 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 19:23:59.074416 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:23:59.074629 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Sep 12 19:23:59.074811 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Sep 12 19:23:59.074985 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 12 19:23:59.075156 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 19:23:59.075348 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:23:59.075545 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 12 19:23:59.075718 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 19:23:59.075898 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:23:59.076091 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 12 19:23:59.078378 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 19:23:59.078587 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:23:59.078779 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 12 19:23:59.078955 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 19:23:59.079129 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:23:59.079338 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 12 19:23:59.079571 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 19:23:59.079782 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:23:59.079960 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 12 19:23:59.080144 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 19:23:59.082363 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:23:59.082386 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Sep 12 19:23:59.082399 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Sep 12 19:23:59.082412 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Sep 12 19:23:59.082425 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Sep 12 19:23:59.082460 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Sep 12 19:23:59.082473 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Sep 12 19:23:59.082486 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Sep 12 19:23:59.082499 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Sep 12 19:23:59.082512 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Sep 12 19:23:59.082524 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Sep 12 19:23:59.082537 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Sep 12 19:23:59.082550 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Sep 12 19:23:59.082562 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Sep 12 19:23:59.082582 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Sep 12 19:23:59.082595 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Sep 12 19:23:59.082607 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Sep 12 19:23:59.082620 kernel: iommu: Default domain type: Translated
Sep 12 19:23:59.082633 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Sep 12 19:23:59.082662 kernel: PCI: Using ACPI for IRQ routing
Sep 12 19:23:59.082685 kernel: PCI: pci_cache_line_size set to 64 bytes
Sep 12 19:23:59.082703 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Sep 12 19:23:59.082716 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Sep 12 19:23:59.082899 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Sep 12 19:23:59.083072 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Sep 12 19:23:59.085272 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Sep 12 19:23:59.085294 kernel: vgaarb: loaded
Sep 12 19:23:59.085307 kernel: clocksource: Switched to clocksource kvm-clock
Sep 12 19:23:59.085320 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 19:23:59.085333 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 19:23:59.085346 kernel: pnp: PnP ACPI init
Sep 12 19:23:59.085576 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Sep 12 19:23:59.085598 kernel: pnp: PnP ACPI: found 5 devices
Sep 12 19:23:59.085611 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Sep 12 19:23:59.085624 kernel: NET: Registered PF_INET protocol family
Sep 12 19:23:59.085636 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 19:23:59.085649 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Sep 12 19:23:59.085662 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 19:23:59.085675 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Sep 12 19:23:59.085695 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Sep 12 19:23:59.085708 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Sep 12 19:23:59.085720 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 19:23:59.085733 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Sep 12 19:23:59.085746 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 19:23:59.085759 kernel: NET: Registered PF_XDP protocol family
Sep 12 19:23:59.085940 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Sep 12 19:23:59.086143 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Sep 12 19:23:59.086343 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Sep 12 19:23:59.086531 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Sep 12 19:23:59.086707 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Sep 12 19:23:59.086880 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Sep 12 19:23:59.087054 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Sep 12 19:23:59.089290 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Sep 12 19:23:59.089512 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Sep 12 19:23:59.089719 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Sep 12 19:23:59.089938 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Sep 12 19:23:59.090116 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Sep 12 19:23:59.090345 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Sep 12 19:23:59.090534 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Sep 12 19:23:59.090709 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Sep 12 19:23:59.090891 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Sep 12 19:23:59.091113 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Sep 12 19:23:59.092382 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Sep 12 19:23:59.092594 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Sep 12 19:23:59.092767 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Sep 12 19:23:59.092953 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Sep 12 19:23:59.093128 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:23:59.093391 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Sep 12 19:23:59.093579 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Sep 12 19:23:59.093768 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Sep 12 19:23:59.093968 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:23:59.094164 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Sep 12 19:23:59.094373 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Sep 12 19:23:59.094568 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Sep 12 19:23:59.094748 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:23:59.094927 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Sep 12 19:23:59.095097 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Sep 12 19:23:59.095321 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Sep 12 19:23:59.095527 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:23:59.095704 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Sep 12 19:23:59.095899 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Sep 12 19:23:59.096073 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Sep 12 19:23:59.096312 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:23:59.096521 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Sep 12 19:23:59.096702 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Sep 12 19:23:59.096873 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Sep 12 19:23:59.097054 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:23:59.097282 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Sep 12 19:23:59.097472 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Sep 12 19:23:59.097654 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Sep 12 19:23:59.097826 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:23:59.097998 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Sep 12 19:23:59.098182 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Sep 12 19:23:59.098406 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Sep 12 19:23:59.098604 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:23:59.098770 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Sep 12 19:23:59.098928 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Sep 12 19:23:59.099100 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Sep 12 19:23:59.099342 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Sep 12 19:23:59.099534 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Sep 12 19:23:59.099689 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Sep 12 19:23:59.099895 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Sep 12 19:23:59.100047 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Sep 12 19:23:59.100232 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Sep 12 19:23:59.100408 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Sep 12 19:23:59.100624 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Sep 12 19:23:59.100789 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Sep 12 19:23:59.100960 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Sep 12 19:23:59.101230 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Sep 12 19:23:59.101410 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Sep 12 19:23:59.101589 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Sep 12 19:23:59.101781 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Sep 12 19:23:59.101946 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Sep 12 19:23:59.102129 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Sep 12 19:23:59.102369 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Sep 12 19:23:59.102554 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Sep 12 19:23:59.102733 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Sep 12 19:23:59.102904 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Sep 12 19:23:59.103078 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Sep 12 19:23:59.103287 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Sep 12 19:23:59.103489 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Sep 12 19:23:59.103657 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Sep 12 19:23:59.103819 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Sep 12 19:23:59.103988 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Sep 12 19:23:59.104150 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Sep 12 19:23:59.104357 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Sep 12 19:23:59.104379 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Sep 12 19:23:59.104393 kernel: PCI: CLS 0 bytes, default 64
Sep 12 19:23:59.104407 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Sep 12 19:23:59.104428 kernel: software IO TLB: mapped [mem 
0x0000000079800000-0x000000007d800000] (64MB) Sep 12 19:23:59.104455 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Sep 12 19:23:59.104469 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Sep 12 19:23:59.104483 kernel: Initialise system trusted keyrings Sep 12 19:23:59.104502 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Sep 12 19:23:59.104516 kernel: Key type asymmetric registered Sep 12 19:23:59.104529 kernel: Asymmetric key parser 'x509' registered Sep 12 19:23:59.104542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Sep 12 19:23:59.104555 kernel: io scheduler mq-deadline registered Sep 12 19:23:59.104568 kernel: io scheduler kyber registered Sep 12 19:23:59.104581 kernel: io scheduler bfq registered Sep 12 19:23:59.104758 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Sep 12 19:23:59.104936 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Sep 12 19:23:59.105120 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.105361 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Sep 12 19:23:59.105594 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Sep 12 19:23:59.105780 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.105954 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Sep 12 19:23:59.106127 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Sep 12 19:23:59.106349 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.106542 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Sep 12 19:23:59.106716 kernel: pcieport 0000:00:02.3: AER: enabled 
with IRQ 27 Sep 12 19:23:59.106891 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.107065 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Sep 12 19:23:59.107291 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Sep 12 19:23:59.107488 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.107762 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Sep 12 19:23:59.107937 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Sep 12 19:23:59.108109 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.108323 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Sep 12 19:23:59.108535 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Sep 12 19:23:59.108715 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.108889 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Sep 12 19:23:59.109083 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Sep 12 19:23:59.109335 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Sep 12 19:23:59.109357 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Sep 12 19:23:59.109372 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Sep 12 19:23:59.109394 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Sep 12 19:23:59.109408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 12 19:23:59.109431 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Sep 12 19:23:59.109485 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 
0x60,0x64 irq 1,12 Sep 12 19:23:59.109518 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Sep 12 19:23:59.109532 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Sep 12 19:23:59.109545 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Sep 12 19:23:59.109751 kernel: rtc_cmos 00:03: RTC can wake from S4 Sep 12 19:23:59.109918 kernel: rtc_cmos 00:03: registered as rtc0 Sep 12 19:23:59.110143 kernel: rtc_cmos 00:03: setting system clock to 2025-09-12T19:23:58 UTC (1757705038) Sep 12 19:23:59.110390 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Sep 12 19:23:59.110411 kernel: intel_pstate: CPU model not supported Sep 12 19:23:59.110431 kernel: NET: Registered PF_INET6 protocol family Sep 12 19:23:59.110458 kernel: Segment Routing with IPv6 Sep 12 19:23:59.110472 kernel: In-situ OAM (IOAM) with IPv6 Sep 12 19:23:59.110493 kernel: NET: Registered PF_PACKET protocol family Sep 12 19:23:59.110507 kernel: Key type dns_resolver registered Sep 12 19:23:59.110528 kernel: IPI shorthand broadcast: enabled Sep 12 19:23:59.110542 kernel: sched_clock: Marking stable (1580006682, 226915436)->(2056834261, -249912143) Sep 12 19:23:59.110555 kernel: registered taskstats version 1 Sep 12 19:23:59.110568 kernel: Loading compiled-in X.509 certificates Sep 12 19:23:59.110582 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 449ba23cbe21e08b3bddb674b4885682335ee1f9' Sep 12 19:23:59.110600 kernel: Key type .fscrypt registered Sep 12 19:23:59.110614 kernel: Key type fscrypt-provisioning registered Sep 12 19:23:59.110627 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 12 19:23:59.110641 kernel: ima: Allocated hash algorithm: sha1
Sep 12 19:23:59.110659 kernel: ima: No architecture policies found
Sep 12 19:23:59.110673 kernel: clk: Disabling unused clocks
Sep 12 19:23:59.110686 kernel: Freeing unused kernel image (initmem) memory: 42884K
Sep 12 19:23:59.110699 kernel: Write protecting the kernel read-only data: 36864k
Sep 12 19:23:59.110712 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Sep 12 19:23:59.110726 kernel: Run /init as init process
Sep 12 19:23:59.110739 kernel: with arguments:
Sep 12 19:23:59.110752 kernel: /init
Sep 12 19:23:59.110765 kernel: with environment:
Sep 12 19:23:59.110784 kernel: HOME=/
Sep 12 19:23:59.110797 kernel: TERM=linux
Sep 12 19:23:59.110810 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 19:23:59.110826 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 19:23:59.110842 systemd[1]: Detected virtualization kvm.
Sep 12 19:23:59.110857 systemd[1]: Detected architecture x86-64.
Sep 12 19:23:59.110871 systemd[1]: Running in initrd.
Sep 12 19:23:59.110884 systemd[1]: No hostname configured, using default hostname.
Sep 12 19:23:59.110904 systemd[1]: Hostname set to .
Sep 12 19:23:59.110918 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 19:23:59.110932 systemd[1]: Queued start job for default target initrd.target.
Sep 12 19:23:59.110946 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:23:59.110960 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:23:59.110974 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 19:23:59.111023 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 19:23:59.111038 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 19:23:59.111060 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 19:23:59.111077 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 19:23:59.111091 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 19:23:59.111105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:23:59.111119 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 19:23:59.111133 systemd[1]: Reached target paths.target - Path Units.
Sep 12 19:23:59.111153 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 19:23:59.111167 systemd[1]: Reached target swap.target - Swaps.
Sep 12 19:23:59.111181 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 19:23:59.111211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 19:23:59.111226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 19:23:59.111240 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 19:23:59.111254 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 12 19:23:59.111268 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 19:23:59.111282 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 19:23:59.111302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 19:23:59.111329 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 19:23:59.111343 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 19:23:59.111357 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 19:23:59.111370 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 19:23:59.111390 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 19:23:59.111403 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 19:23:59.111429 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 19:23:59.111465 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:23:59.111486 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 19:23:59.111544 systemd-journald[202]: Collecting audit messages is disabled.
Sep 12 19:23:59.111576 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 19:23:59.111591 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 19:23:59.111612 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 19:23:59.111627 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 19:23:59.111640 kernel: Bridge firewalling registered
Sep 12 19:23:59.111655 systemd-journald[202]: Journal started
Sep 12 19:23:59.111685 systemd-journald[202]: Runtime Journal (/run/log/journal/d22b38e189d548a3b2cb94195c709052) is 4.7M, max 38.0M, 33.2M free.
Sep 12 19:23:59.051420 systemd-modules-load[203]: Inserted module 'overlay'
Sep 12 19:23:59.091985 systemd-modules-load[203]: Inserted module 'br_netfilter'
Sep 12 19:23:59.156251 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 19:23:59.155875 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 19:23:59.158080 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:23:59.160041 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 19:23:59.169594 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:23:59.171330 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 19:23:59.177390 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 19:23:59.180994 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 19:23:59.200658 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 19:23:59.204573 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:23:59.214761 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 19:23:59.218243 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 19:23:59.221613 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 19:23:59.234387 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 19:23:59.240451 dracut-cmdline[234]: dracut-dracut-053
Sep 12 19:23:59.244402 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=1ff9ec556ac80c67ae2340139aa421bf26af13357ec9e72632b4878e9945dc9a
Sep 12 19:23:59.280938 systemd-resolved[237]: Positive Trust Anchors:
Sep 12 19:23:59.282075 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 19:23:59.283071 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 19:23:59.296279 systemd-resolved[237]: Defaulting to hostname 'linux'.
Sep 12 19:23:59.299941 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 19:23:59.303267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 19:23:59.362292 kernel: SCSI subsystem initialized
Sep 12 19:23:59.374223 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 19:23:59.388238 kernel: iscsi: registered transport (tcp)
Sep 12 19:23:59.414529 kernel: iscsi: registered transport (qla4xxx)
Sep 12 19:23:59.414609 kernel: QLogic iSCSI HBA Driver
Sep 12 19:23:59.475634 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 19:23:59.485440 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 19:23:59.519563 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 19:23:59.519664 kernel: device-mapper: uevent: version 1.0.3
Sep 12 19:23:59.519699 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 12 19:23:59.571245 kernel: raid6: sse2x4 gen() 13817 MB/s
Sep 12 19:23:59.587234 kernel: raid6: sse2x2 gen() 9681 MB/s
Sep 12 19:23:59.605800 kernel: raid6: sse2x1 gen() 9969 MB/s
Sep 12 19:23:59.605849 kernel: raid6: using algorithm sse2x4 gen() 13817 MB/s
Sep 12 19:23:59.624731 kernel: raid6: .... xor() 8070 MB/s, rmw enabled
Sep 12 19:23:59.624785 kernel: raid6: using ssse3x2 recovery algorithm
Sep 12 19:23:59.651226 kernel: xor: automatically using best checksumming function avx
Sep 12 19:23:59.883518 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 19:23:59.897536 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 19:23:59.903394 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 19:23:59.931166 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Sep 12 19:23:59.939361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 19:23:59.949463 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 19:23:59.971171 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation
Sep 12 19:24:00.011752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 19:24:00.020449 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 19:24:00.139047 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 19:24:00.149407 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 19:24:00.168293 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 19:24:00.175322 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 19:24:00.176786 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:24:00.178870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 19:24:00.187543 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 19:24:00.213862 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 19:24:00.298232 kernel: cryptd: max_cpu_qlen set to 1000
Sep 12 19:24:00.305235 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Sep 12 19:24:00.330959 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Sep 12 19:24:00.335233 kernel: libata version 3.00 loaded.
Sep 12 19:24:00.343906 kernel: AVX version of gcm_enc/dec engaged.
Sep 12 19:24:00.343981 kernel: AES CTR mode by8 optimization enabled
Sep 12 19:24:00.344444 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 19:24:00.344725 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:24:00.351852 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 19:24:00.351924 kernel: GPT:17805311 != 125829119
Sep 12 19:24:00.351946 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 19:24:00.351980 kernel: GPT:17805311 != 125829119
Sep 12 19:24:00.351998 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 19:24:00.352016 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:24:00.347618 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:24:00.374944 kernel: ahci 0000:00:1f.2: version 3.0
Sep 12 19:24:00.375330 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Sep 12 19:24:00.360942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 19:24:00.385130 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Sep 12 19:24:00.385443 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Sep 12 19:24:00.385661 kernel: scsi host0: ahci
Sep 12 19:24:00.385974 kernel: scsi host1: ahci
Sep 12 19:24:00.386275 kernel: scsi host2: ahci
Sep 12 19:24:00.361181 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:24:00.388719 kernel: scsi host3: ahci
Sep 12 19:24:00.389100 kernel: scsi host4: ahci
Sep 12 19:24:00.381571 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:24:00.441369 kernel: scsi host5: ahci
Sep 12 19:24:00.441664 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Sep 12 19:24:00.441688 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Sep 12 19:24:00.441706 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Sep 12 19:24:00.441730 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Sep 12 19:24:00.441750 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Sep 12 19:24:00.441768 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Sep 12 19:24:00.441802 kernel: BTRFS: device fsid 6dad227e-2c0d-42e6-b0d2-5c756384bc19 devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (469)
Sep 12 19:24:00.441823 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (476)
Sep 12 19:24:00.392369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:24:00.453217 kernel: ACPI: bus type USB registered
Sep 12 19:24:00.468970 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 19:24:00.536499 kernel: usbcore: registered new interface driver usbfs
Sep 12 19:24:00.536548 kernel: usbcore: registered new interface driver hub
Sep 12 19:24:00.536591 kernel: usbcore: registered new device driver usb
Sep 12 19:24:00.538098 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:24:00.547158 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 19:24:00.566116 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 19:24:00.573282 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 19:24:00.574158 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 19:24:00.588415 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 19:24:00.592809 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 19:24:00.596432 disk-uuid[558]: Primary Header is updated.
Sep 12 19:24:00.596432 disk-uuid[558]: Secondary Entries is updated.
Sep 12 19:24:00.596432 disk-uuid[558]: Secondary Header is updated.
Sep 12 19:24:00.605827 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:24:00.613243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:24:00.637861 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:24:00.700227 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.700310 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.704456 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.704629 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.706377 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.707226 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Sep 12 19:24:00.768219 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Sep 12 19:24:00.772279 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Sep 12 19:24:00.782215 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Sep 12 19:24:00.795534 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Sep 12 19:24:00.795806 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Sep 12 19:24:00.797309 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Sep 12 19:24:00.799850 kernel: hub 1-0:1.0: USB hub found
Sep 12 19:24:00.800144 kernel: hub 1-0:1.0: 4 ports detected
Sep 12 19:24:00.804457 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Sep 12 19:24:00.804717 kernel: hub 2-0:1.0: USB hub found
Sep 12 19:24:00.805216 kernel: hub 2-0:1.0: 4 ports detected
Sep 12 19:24:01.039326 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Sep 12 19:24:01.180477 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 19:24:01.186828 kernel: usbcore: registered new interface driver usbhid
Sep 12 19:24:01.186875 kernel: usbhid: USB HID core driver
Sep 12 19:24:01.193938 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Sep 12 19:24:01.193984 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Sep 12 19:24:01.614244 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 19:24:01.616525 disk-uuid[559]: The operation has completed successfully.
Sep 12 19:24:01.669626 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 19:24:01.669791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 19:24:01.693462 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 19:24:01.705690 sh[585]: Success
Sep 12 19:24:01.722252 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Sep 12 19:24:01.785592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 19:24:01.798362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 19:24:01.802307 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 19:24:01.834632 kernel: BTRFS info (device dm-0): first mount of filesystem 6dad227e-2c0d-42e6-b0d2-5c756384bc19
Sep 12 19:24:01.834699 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:24:01.834747 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 12 19:24:01.837335 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 19:24:01.840109 kernel: BTRFS info (device dm-0): using free space tree
Sep 12 19:24:01.852615 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 19:24:01.854342 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 19:24:01.864502 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 19:24:01.868385 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 19:24:01.886727 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:24:01.886781 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:24:01.886801 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:24:01.892211 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:24:01.906009 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 12 19:24:01.908445 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:24:01.915419 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 19:24:01.923479 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 19:24:02.158735 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 19:24:02.165845 ignition[672]: Ignition 2.19.0
Sep 12 19:24:02.166244 ignition[672]: Stage: fetch-offline
Sep 12 19:24:02.166340 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:02.166372 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:02.166618 ignition[672]: parsed url from cmdline: ""
Sep 12 19:24:02.166626 ignition[672]: no config URL provided
Sep 12 19:24:02.170452 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 19:24:02.166636 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 19:24:02.171549 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 19:24:02.166653 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Sep 12 19:24:02.166662 ignition[672]: failed to fetch config: resource requires networking
Sep 12 19:24:02.167064 ignition[672]: Ignition finished successfully
Sep 12 19:24:02.207359 systemd-networkd[774]: lo: Link UP
Sep 12 19:24:02.207388 systemd-networkd[774]: lo: Gained carrier
Sep 12 19:24:02.209872 systemd-networkd[774]: Enumeration completed
Sep 12 19:24:02.210461 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:24:02.210466 systemd-networkd[774]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 19:24:02.210716 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 19:24:02.211752 systemd[1]: Reached target network.target - Network.
Sep 12 19:24:02.212090 systemd-networkd[774]: eth0: Link UP
Sep 12 19:24:02.212097 systemd-networkd[774]: eth0: Gained carrier
Sep 12 19:24:02.212108 systemd-networkd[774]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:24:02.217386 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Sep 12 19:24:02.259294 systemd-networkd[774]: eth0: DHCPv4 address 10.230.43.118/30, gateway 10.230.43.117 acquired from 10.230.43.117
Sep 12 19:24:02.287029 ignition[777]: Ignition 2.19.0
Sep 12 19:24:02.287054 ignition[777]: Stage: fetch
Sep 12 19:24:02.288889 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:02.288920 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:02.289122 ignition[777]: parsed url from cmdline: ""
Sep 12 19:24:02.289130 ignition[777]: no config URL provided
Sep 12 19:24:02.289141 ignition[777]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 19:24:02.289158 ignition[777]: no config at "/usr/lib/ignition/user.ign"
Sep 12 19:24:02.289432 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Sep 12 19:24:02.289779 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Sep 12 19:24:02.290152 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Sep 12 19:24:02.309039 ignition[777]: GET result: OK
Sep 12 19:24:02.309806 ignition[777]: parsing config with SHA512: c498f0606844cbb686c2d4ca2b8c610c5f165871a838e39a1ef60f7cb29cf6531be7976f753c0e892cef0222acbc4921a1c9288dc83adb93e9ed9556c0016e16
Sep 12 19:24:02.317291 unknown[777]: fetched base config from "system"
Sep 12 19:24:02.317310 unknown[777]: fetched base config from "system"
Sep 12 19:24:02.317922 ignition[777]: fetch: fetch complete
Sep 12 19:24:02.317321 unknown[777]: fetched user config from "openstack"
Sep 12 19:24:02.317931 ignition[777]: fetch: fetch passed
Sep 12 19:24:02.319880 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Sep 12 19:24:02.317997 ignition[777]: Ignition finished successfully
Sep 12 19:24:02.332265 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 19:24:02.353300 ignition[784]: Ignition 2.19.0
Sep 12 19:24:02.353322 ignition[784]: Stage: kargs
Sep 12 19:24:02.353583 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:02.353603 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:02.354713 ignition[784]: kargs: kargs passed
Sep 12 19:24:02.354784 ignition[784]: Ignition finished successfully
Sep 12 19:24:02.357655 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 19:24:02.367433 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 19:24:02.388976 ignition[790]: Ignition 2.19.0
Sep 12 19:24:02.390040 ignition[790]: Stage: disks
Sep 12 19:24:02.390276 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:02.390296 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:02.394087 ignition[790]: disks: disks passed
Sep 12 19:24:02.394901 ignition[790]: Ignition finished successfully
Sep 12 19:24:02.396755 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 19:24:02.398119 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 19:24:02.399256 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 19:24:02.400822 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 19:24:02.402412 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 19:24:02.403792 systemd[1]: Reached target basic.target - Basic System.
Sep 12 19:24:02.411422 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 19:24:02.431296 systemd-fsck[799]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Sep 12 19:24:02.436104 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 19:24:02.441351 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 19:24:02.564262 kernel: EXT4-fs (vda9): mounted filesystem 791ad691-63ae-4dbc-8ce3-6c8819e56736 r/w with ordered data mode. Quota mode: none.
Sep 12 19:24:02.565238 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 19:24:02.566640 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 19:24:02.573306 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 19:24:02.579503 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 19:24:02.581366 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 19:24:02.585539 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Sep 12 19:24:02.587372 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 19:24:02.588462 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 19:24:02.602471 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (807)
Sep 12 19:24:02.602505 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:24:02.602525 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:24:02.602544 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:24:02.602561 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:24:02.615050 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 19:24:02.616856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 19:24:02.627495 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 19:24:02.725110 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 19:24:02.733429 initrd-setup-root[841]: cut: /sysroot/etc/group: No such file or directory
Sep 12 19:24:02.744488 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 19:24:02.756703 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 19:24:02.879855 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 19:24:02.886420 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 19:24:02.893493 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 19:24:02.905827 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 19:24:02.910271 kernel: BTRFS info (device vda6): last unmount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:24:02.933832 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 19:24:02.946460 ignition[924]: INFO : Ignition 2.19.0
Sep 12 19:24:02.946460 ignition[924]: INFO : Stage: mount
Sep 12 19:24:02.948404 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:02.948404 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:02.951003 ignition[924]: INFO : mount: mount passed
Sep 12 19:24:02.951003 ignition[924]: INFO : Ignition finished successfully
Sep 12 19:24:02.950718 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 19:24:03.395578 systemd-networkd[774]: eth0: Gained IPv6LL
Sep 12 19:24:04.904688 systemd-networkd[774]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8add:24:19ff:fee6:2b76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8add:24:19ff:fee6:2b76/64 assigned by NDisc.
Sep 12 19:24:04.904705 systemd-networkd[774]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Sep 12 19:24:09.827609 coreos-metadata[809]: Sep 12 19:24:09.827 WARN failed to locate config-drive, using the metadata service API instead
Sep 12 19:24:09.848864 coreos-metadata[809]: Sep 12 19:24:09.848 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Sep 12 19:24:09.868486 coreos-metadata[809]: Sep 12 19:24:09.868 INFO Fetch successful
Sep 12 19:24:09.870794 coreos-metadata[809]: Sep 12 19:24:09.868 INFO wrote hostname srv-gt1mb.gb1.brightbox.com to /sysroot/etc/hostname
Sep 12 19:24:09.872474 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Sep 12 19:24:09.872652 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Sep 12 19:24:09.882326 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 19:24:09.898440 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 19:24:09.909242 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940)
Sep 12 19:24:09.913228 kernel: BTRFS info (device vda6): first mount of filesystem 4080f51d-d3f2-4545-8f59-3798077218dc
Sep 12 19:24:09.916842 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Sep 12 19:24:09.916911 kernel: BTRFS info (device vda6): using free space tree
Sep 12 19:24:09.921242 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 12 19:24:09.924645 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 19:24:09.961983 ignition[958]: INFO : Ignition 2.19.0
Sep 12 19:24:09.961983 ignition[958]: INFO : Stage: files
Sep 12 19:24:09.963899 ignition[958]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:09.963899 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:09.963899 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 19:24:09.966862 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 19:24:09.966862 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 19:24:09.969422 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 19:24:09.969422 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 19:24:09.969422 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 19:24:09.968561 unknown[958]: wrote ssh authorized keys file for user: core
Sep 12 19:24:09.973518 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 19:24:09.973518 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Sep 12 19:24:10.181546 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 19:24:10.574588 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 19:24:10.576151 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 19:24:10.591384 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Sep 12 19:24:11.006555 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Sep 12 19:24:13.255908 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Sep 12 19:24:13.255908 ignition[958]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 19:24:13.274732 ignition[958]: INFO : files: files passed
Sep 12 19:24:13.274732 ignition[958]: INFO : Ignition finished successfully
Sep 12 19:24:13.276823 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 19:24:13.287469 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 19:24:13.297448 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 19:24:13.304875 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 19:24:13.305849 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 19:24:13.317366 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:24:13.317366 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:24:13.320654 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 19:24:13.322373 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 19:24:13.323814 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 19:24:13.330464 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 19:24:13.374971 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 19:24:13.375210 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 19:24:13.377541 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 19:24:13.378868 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 19:24:13.380672 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 19:24:13.394437 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 19:24:13.413659 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 19:24:13.418399 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 12 19:24:13.445905 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 12 19:24:13.446973 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:24:13.448694 systemd[1]: Stopped target timers.target - Timer Units.
Sep 12 19:24:13.450233 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 12 19:24:13.450423 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 12 19:24:13.452400 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 12 19:24:13.453424 systemd[1]: Stopped target basic.target - Basic System.
Sep 12 19:24:13.454950 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 12 19:24:13.456336 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 19:24:13.457730 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 12 19:24:13.459289 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 12 19:24:13.460848 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 19:24:13.462458 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 12 19:24:13.463916 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 12 19:24:13.465477 systemd[1]: Stopped target swap.target - Swaps.
Sep 12 19:24:13.466762 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 12 19:24:13.466967 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 19:24:13.468645 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 12 19:24:13.469628 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:24:13.471026 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 12 19:24:13.473313 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:24:13.474525 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 12 19:24:13.474687 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 12 19:24:13.476750 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 12 19:24:13.476929 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 19:24:13.478587 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 12 19:24:13.478766 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 12 19:24:13.493951 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 12 19:24:13.496541 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 12 19:24:13.497353 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 12 19:24:13.498358 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 19:24:13.500464 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 12 19:24:13.500725 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 19:24:13.509247 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 12 19:24:13.509416 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 12 19:24:13.538929 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 12 19:24:13.541743 ignition[1010]: INFO : Ignition 2.19.0
Sep 12 19:24:13.541743 ignition[1010]: INFO : Stage: umount
Sep 12 19:24:13.545265 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 19:24:13.545265 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Sep 12 19:24:13.545265 ignition[1010]: INFO : umount: umount passed
Sep 12 19:24:13.545265 ignition[1010]: INFO : Ignition finished successfully
Sep 12 19:24:13.546159 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 19:24:13.546363 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 19:24:13.547509 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 12 19:24:13.547647 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 12 19:24:13.550018 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 12 19:24:13.550666 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 12 19:24:13.551884 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 12 19:24:13.551968 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 12 19:24:13.553349 systemd[1]: ignition-fetch.service: Deactivated successfully.
Sep 12 19:24:13.553435 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Sep 12 19:24:13.554724 systemd[1]: Stopped target network.target - Network.
Sep 12 19:24:13.555946 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 12 19:24:13.556025 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 19:24:13.557467 systemd[1]: Stopped target paths.target - Path Units.
Sep 12 19:24:13.558745 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 12 19:24:13.563254 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:24:13.564247 systemd[1]: Stopped target slices.target - Slice Units.
Sep 12 19:24:13.565786 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 12 19:24:13.567102 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 12 19:24:13.567175 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 19:24:13.568713 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 12 19:24:13.568787 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 19:24:13.570447 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 19:24:13.570527 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 19:24:13.571754 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 19:24:13.571841 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 19:24:13.573061 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 19:24:13.573144 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 19:24:13.574771 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 19:24:13.576879 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 19:24:13.580505 systemd-networkd[774]: eth0: DHCPv6 lease lost
Sep 12 19:24:13.584798 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 19:24:13.585037 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 19:24:13.588005 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 19:24:13.588253 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 19:24:13.593331 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 19:24:13.593629 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 19:24:13.600396 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 19:24:13.602548 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 19:24:13.602627 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 19:24:13.606146 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 19:24:13.606240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 19:24:13.607908 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 19:24:13.607992 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 19:24:13.609374 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 19:24:13.609447 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 19:24:13.611110 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 19:24:13.625454 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 19:24:13.625652 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 19:24:13.627787 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 19:24:13.628032 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 19:24:13.630786 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 19:24:13.630946 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 19:24:13.632666 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 19:24:13.632756 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 19:24:13.634260 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 19:24:13.634334 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 19:24:13.636448 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 19:24:13.636515 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 19:24:13.637888 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 19:24:13.637961 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 19:24:13.648414 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 19:24:13.651591 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 19:24:13.651727 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 19:24:13.655451 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 12 19:24:13.655552 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 19:24:13.656358 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 12 19:24:13.656445 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 19:24:13.657233 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 19:24:13.657296 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:24:13.659820 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 19:24:13.659970 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 19:24:13.661851 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 19:24:13.673791 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 19:24:13.682623 systemd[1]: Switching root.
Sep 12 19:24:13.719609 systemd-journald[202]: Journal stopped
Sep 12 19:24:15.353048 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Sep 12 19:24:15.353162 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 19:24:15.353239 kernel: SELinux: policy capability open_perms=1
Sep 12 19:24:15.353269 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 19:24:15.353309 kernel: SELinux: policy capability always_check_network=0
Sep 12 19:24:15.353331 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 19:24:15.353350 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 19:24:15.353368 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 19:24:15.353393 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 19:24:15.353424 kernel: audit: type=1403 audit(1757705053.974:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 19:24:15.353443 systemd[1]: Successfully loaded SELinux policy in 51.971ms.
Sep 12 19:24:15.353474 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 29.109ms.
Sep 12 19:24:15.353513 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 12 19:24:15.353547 systemd[1]: Detected virtualization kvm.
Sep 12 19:24:15.353568 systemd[1]: Detected architecture x86-64.
Sep 12 19:24:15.353599 systemd[1]: Detected first boot.
Sep 12 19:24:15.353626 systemd[1]: Hostname set to .
Sep 12 19:24:15.353659 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 19:24:15.353679 zram_generator::config[1053]: No configuration found.
Sep 12 19:24:15.353712 systemd[1]: Populated /etc with preset unit settings.
Sep 12 19:24:15.355489 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 19:24:15.355535 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 19:24:15.355558 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 19:24:15.355580 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 19:24:15.355601 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 19:24:15.355637 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 19:24:15.355656 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 19:24:15.355687 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 19:24:15.355707 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 19:24:15.355754 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 19:24:15.355777 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 19:24:15.355796 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 19:24:15.355817 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 19:24:15.355838 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 19:24:15.355858 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 19:24:15.355877 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 19:24:15.355898 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 19:24:15.355936 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Sep 12 19:24:15.355971 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 19:24:15.355993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 19:24:15.356014 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 19:24:15.356034 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 19:24:15.356066 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 19:24:15.356088 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 19:24:15.356122 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 19:24:15.356144 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 19:24:15.356164 systemd[1]: Reached target swap.target - Swaps.
Sep 12 19:24:15.356183 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 19:24:15.356218 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 19:24:15.356238 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 19:24:15.356275 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 19:24:15.356319 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 19:24:15.356352 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 19:24:15.356372 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 19:24:15.356389 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 19:24:15.356418 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 19:24:15.356437 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:15.356455 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 19:24:15.356499 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 19:24:15.356546 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 19:24:15.356568 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 19:24:15.356588 systemd[1]: Reached target machines.target - Containers.
Sep 12 19:24:15.356621 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 19:24:15.356641 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 19:24:15.356661 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 19:24:15.356681 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 19:24:15.356701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 19:24:15.356735 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 19:24:15.356757 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 19:24:15.356777 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 19:24:15.356797 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 19:24:15.356818 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 19:24:15.356838 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 19:24:15.356865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 19:24:15.356887 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 19:24:15.356906 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 19:24:15.356940 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 19:24:15.356961 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 19:24:15.356982 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 19:24:15.357015 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 19:24:15.357036 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 19:24:15.357066 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 19:24:15.357088 systemd[1]: Stopped verity-setup.service.
Sep 12 19:24:15.357108 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:15.357128 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 19:24:15.357163 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 19:24:15.357185 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 19:24:15.365483 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 19:24:15.365510 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 19:24:15.365550 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 19:24:15.365573 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 19:24:15.365594 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 19:24:15.365613 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 19:24:15.365633 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 19:24:15.365653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 19:24:15.365695 kernel: loop: module loaded
Sep 12 19:24:15.365748 systemd-journald[1135]: Collecting audit messages is disabled.
Sep 12 19:24:15.365785 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 19:24:15.365806 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 19:24:15.365826 kernel: fuse: init (API version 7.39)
Sep 12 19:24:15.365846 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 19:24:15.365865 kernel: ACPI: bus type drm_connector registered
Sep 12 19:24:15.365908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 19:24:15.365932 systemd-journald[1135]: Journal started
Sep 12 19:24:15.365984 systemd-journald[1135]: Runtime Journal (/run/log/journal/d22b38e189d548a3b2cb94195c709052) is 4.7M, max 38.0M, 33.2M free.
Sep 12 19:24:14.812318 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 19:24:15.369466 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 19:24:14.843369 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 19:24:14.844147 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 19:24:15.372989 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 19:24:15.373275 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 19:24:15.374417 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 19:24:15.374645 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 19:24:15.376097 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 19:24:15.377284 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 19:24:15.378362 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 19:24:15.388920 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 19:24:15.397844 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 19:24:15.403255 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 19:24:15.416265 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 19:24:15.417185 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 19:24:15.417294 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 19:24:15.421387 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 12 19:24:15.429691 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 12 19:24:15.433414 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 19:24:15.434328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 19:24:15.444459 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 19:24:15.460442 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 19:24:15.461358 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 19:24:15.473672 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 19:24:15.475728 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 19:24:15.484174 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 19:24:15.493545 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 19:24:15.504407 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 19:24:15.511383 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 19:24:15.512917 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 19:24:15.522714 kernel: loop0: detected capacity change from 0 to 142488
Sep 12 19:24:15.515261 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 12 19:24:15.587232 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 19:24:15.582789 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 19:24:15.597480 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 12 19:24:15.605670 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 19:24:15.607397 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 19:24:15.610574 systemd-journald[1135]: Time spent on flushing to /var/log/journal/d22b38e189d548a3b2cb94195c709052 is 107.905ms for 1146 entries.
Sep 12 19:24:15.610574 systemd-journald[1135]: System Journal (/var/log/journal/d22b38e189d548a3b2cb94195c709052) is 8.0M, max 584.8M, 576.8M free.
Sep 12 19:24:15.771848 systemd-journald[1135]: Received client request to flush runtime journal.
Sep 12 19:24:15.772068 kernel: loop1: detected capacity change from 0 to 224512
Sep 12 19:24:15.772118 kernel: loop2: detected capacity change from 0 to 140768
Sep 12 19:24:15.618488 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 12 19:24:15.678260 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 19:24:15.687492 udevadm[1194]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 12 19:24:15.724434 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Sep 12 19:24:15.724454 systemd-tmpfiles[1187]: ACLs are not supported, ignoring.
Sep 12 19:24:15.745728 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 19:24:15.754445 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 19:24:15.757402 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 19:24:15.759532 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 12 19:24:15.778716 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 19:24:15.823348 kernel: loop3: detected capacity change from 0 to 8
Sep 12 19:24:15.839009 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 19:24:15.848149 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 19:24:15.853245 kernel: loop4: detected capacity change from 0 to 142488
Sep 12 19:24:15.877867 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 12 19:24:15.877896 systemd-tmpfiles[1211]: ACLs are not supported, ignoring.
Sep 12 19:24:15.885174 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 19:24:15.929220 kernel: loop5: detected capacity change from 0 to 224512
Sep 12 19:24:15.956255 kernel: loop6: detected capacity change from 0 to 140768
Sep 12 19:24:16.009215 kernel: loop7: detected capacity change from 0 to 8
Sep 12 19:24:16.016790 (sd-merge)[1212]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Sep 12 19:24:16.021297 (sd-merge)[1212]: Merged extensions into '/usr'.
Sep 12 19:24:16.030181 systemd[1]: Reloading requested from client PID 1186 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 19:24:16.031233 systemd[1]: Reloading...
Sep 12 19:24:16.299859 zram_generator::config[1240]: No configuration found.
Sep 12 19:24:16.543334 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 19:24:16.604938 ldconfig[1181]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 19:24:16.635094 systemd[1]: Reloading finished in 602 ms.
Sep 12 19:24:16.671109 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 19:24:16.675471 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 19:24:16.686467 systemd[1]: Starting ensure-sysext.service...
Sep 12 19:24:16.701289 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 19:24:16.718391 systemd[1]: Reloading requested from client PID 1296 ('systemctl') (unit ensure-sysext.service)...
Sep 12 19:24:16.718414 systemd[1]: Reloading...
Sep 12 19:24:16.940542 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 19:24:16.944944 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 19:24:16.951086 systemd-tmpfiles[1297]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 19:24:16.952686 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Sep 12 19:24:16.953466 systemd-tmpfiles[1297]: ACLs are not supported, ignoring.
Sep 12 19:24:16.967717 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 19:24:16.969470 systemd-tmpfiles[1297]: Skipping /boot
Sep 12 19:24:17.011235 zram_generator::config[1324]: No configuration found.
Sep 12 19:24:17.018597 systemd-tmpfiles[1297]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 19:24:17.021228 systemd-tmpfiles[1297]: Skipping /boot
Sep 12 19:24:17.231886 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 19:24:17.300662 systemd[1]: Reloading finished in 581 ms.
Sep 12 19:24:17.323991 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 19:24:17.331996 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 19:24:17.344406 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 19:24:17.358931 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 19:24:17.363453 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 19:24:17.374410 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 19:24:17.379446 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 19:24:17.383957 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 19:24:17.392814 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.393138 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 19:24:17.402544 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 19:24:17.411583 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 19:24:17.416632 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 19:24:17.419412 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 19:24:17.419586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.435257 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 19:24:17.439953 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.440260 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 19:24:17.440495 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 19:24:17.440645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.450238 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.450547 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 19:24:17.460088 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 19:24:17.463211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 19:24:17.463394 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Sep 12 19:24:17.472291 systemd[1]: Finished ensure-sysext.service.
Sep 12 19:24:17.485841 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 19:24:17.488077 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 19:24:17.495683 systemd-udevd[1388]: Using default interface naming scheme 'v255'.
Sep 12 19:24:17.496684 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 19:24:17.497441 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 19:24:17.520678 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 19:24:17.521044 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 19:24:17.527248 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 19:24:17.537515 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 19:24:17.540892 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 19:24:17.542239 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 19:24:17.542494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 19:24:17.543681 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 19:24:17.543892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 19:24:17.549183 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 19:24:17.550767 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 19:24:17.550866 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 19:24:17.569132 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 19:24:17.582412 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 19:24:17.583392 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 19:24:17.595161 augenrules[1423]: No rules
Sep 12 19:24:17.598943 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 19:24:17.601242 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 19:24:17.671859 systemd-networkd[1427]: lo: Link UP
Sep 12 19:24:17.672444 systemd-networkd[1427]: lo: Gained carrier
Sep 12 19:24:17.673932 systemd-networkd[1427]: Enumeration completed
Sep 12 19:24:17.674324 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 19:24:17.696401 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 12 19:24:17.761589 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 12 19:24:17.764386 systemd[1]: Reached target time-set.target - System Time Set.
Sep 12 19:24:17.768177 systemd-resolved[1386]: Positive Trust Anchors:
Sep 12 19:24:17.769237 systemd-resolved[1386]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 19:24:17.769371 systemd-resolved[1386]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 19:24:17.781043 systemd-resolved[1386]: Using system hostname 'srv-gt1mb.gb1.brightbox.com'.
Sep 12 19:24:17.785573 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 19:24:17.786579 systemd[1]: Reached target network.target - Network.
Sep 12 19:24:17.787256 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 19:24:17.807479 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Sep 12 19:24:17.902272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1425)
Sep 12 19:24:17.913950 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:24:17.914125 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 19:24:17.918320 systemd-networkd[1427]: eth0: Link UP
Sep 12 19:24:17.918423 systemd-networkd[1427]: eth0: Gained carrier
Sep 12 19:24:17.918537 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 19:24:17.945309 systemd-networkd[1427]: eth0: DHCPv4 address 10.230.43.118/30, gateway 10.230.43.117 acquired from 10.230.43.117
Sep 12 19:24:17.948491 systemd-timesyncd[1401]: Network configuration changed, trying to establish connection.
Sep 12 19:24:17.960499 kernel: mousedev: PS/2 mouse device common for all mice
Sep 12 19:24:17.989257 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Sep 12 19:24:18.007212 kernel: ACPI: button: Power Button [PWRF]
Sep 12 19:24:18.047027 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Sep 12 19:24:18.047900 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Sep 12 19:24:18.047947 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Sep 12 19:24:18.048247 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Sep 12 19:24:18.220704 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 19:24:18.248034 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 19:24:18.255430 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 19:24:18.302705 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 12 19:24:19.103020 systemd-resolved[1386]: Clock change detected. Flushing caches.
Sep 12 19:24:19.103436 systemd-timesyncd[1401]: Contacted time server 51.89.151.183:123 (0.flatcar.pool.ntp.org).
Sep 12 19:24:19.103672 systemd-timesyncd[1401]: Initial clock synchronization to Fri 2025-09-12 19:24:19.102816 UTC.
Sep 12 19:24:19.177920 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 12 19:24:19.231379 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 19:24:19.241216 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 12 19:24:19.264994 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 19:24:19.306633 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 12 19:24:19.308476 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 19:24:19.309302 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 19:24:19.310253 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 12 19:24:19.311204 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 12 19:24:19.312431 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 12 19:24:19.313354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 12 19:24:19.314156 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 12 19:24:19.314914 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 12 19:24:19.314990 systemd[1]: Reached target paths.target - Path Units.
Sep 12 19:24:19.315618 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 19:24:19.318125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 12 19:24:19.321337 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 12 19:24:19.327107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 12 19:24:19.329709 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 12 19:24:19.331220 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 12 19:24:19.332121 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 19:24:19.332764 systemd[1]: Reached target basic.target - Basic System.
Sep 12 19:24:19.333457 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 12 19:24:19.333515 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 12 19:24:19.337139 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 12 19:24:19.342898 lvm[1473]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 12 19:24:19.351196 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Sep 12 19:24:19.360184 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 12 19:24:19.365698 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 12 19:24:19.372227 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 12 19:24:19.373076 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 12 19:24:19.382191 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 12 19:24:19.384769 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 12 19:24:19.396206 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 12 19:24:19.410130 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 12 19:24:19.419253 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 12 19:24:19.421582 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 12 19:24:19.422565 jq[1477]: false
Sep 12 19:24:19.423819 dbus-daemon[1476]: [system] SELinux support is enabled
Sep 12 19:24:19.423513 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 12 19:24:19.429439 dbus-daemon[1476]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1427 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Sep 12 19:24:19.431055 systemd[1]: Starting update-engine.service - Update Engine...
Sep 12 19:24:19.447139 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 12 19:24:19.450282 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 12 19:24:19.452728 extend-filesystems[1478]: Found loop4
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found loop5
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found loop6
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found loop7
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda1
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda2
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda3
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found usr
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda4
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda6
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda7
Sep 12 19:24:19.463043 extend-filesystems[1478]: Found vda9
Sep 12 19:24:19.463043 extend-filesystems[1478]: Checking size of /dev/vda9
Sep 12 19:24:19.468031 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 12 19:24:19.492956 extend-filesystems[1478]: Resized partition /dev/vda9
Sep 12 19:24:19.483549 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 12 19:24:19.495367 extend-filesystems[1494]: resize2fs 1.47.1 (20-May-2024)
Sep 12 19:24:19.483856 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 12 19:24:19.489101 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 12 19:24:19.490010 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 12 19:24:19.510056 jq[1488]: true
Sep 12 19:24:19.507649 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 12 19:24:19.513636 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.systemd1'
Sep 12 19:24:19.507692 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 12 19:24:19.509109 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 12 19:24:19.509138 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 12 19:24:19.520131 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Sep 12 19:24:19.536168 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Sep 12 19:24:19.561791 systemd[1]: motdgen.service: Deactivated successfully.
Sep 12 19:24:19.562051 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 12 19:24:19.592632 jq[1509]: true Sep 12 19:24:19.612668 tar[1501]: linux-amd64/LICENSE Sep 12 19:24:19.617923 tar[1501]: linux-amd64/helm Sep 12 19:24:19.620996 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1447) Sep 12 19:24:19.624951 (ntainerd)[1514]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 19:24:19.666141 update_engine[1486]: I20250912 19:24:19.663844 1486 main.cc:92] Flatcar Update Engine starting Sep 12 19:24:19.682288 systemd[1]: Started update-engine.service - Update Engine. Sep 12 19:24:19.691331 update_engine[1486]: I20250912 19:24:19.689078 1486 update_check_scheduler.cc:74] Next update check in 10m16s Sep 12 19:24:19.691283 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 19:24:19.765286 systemd-logind[1485]: Watching system buttons on /dev/input/event2 (Power Button) Sep 12 19:24:19.768945 systemd-logind[1485]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Sep 12 19:24:19.771607 systemd-logind[1485]: New seat seat0. Sep 12 19:24:19.780574 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 19:24:19.886079 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Sep 12 19:24:19.916514 extend-filesystems[1494]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 19:24:19.916514 extend-filesystems[1494]: old_desc_blocks = 1, new_desc_blocks = 8 Sep 12 19:24:19.916514 extend-filesystems[1494]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Sep 12 19:24:19.934504 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Sep 12 19:24:19.939893 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 12 19:24:19.917751 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 19:24:19.918456 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
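The resize2fs messages above report the root filesystem growing from 1,617,920 to 15,121,403 4 KiB blocks. A quick script (illustrative only, not part of the boot flow) converts those block counts into human-readable sizes:

```python
# Convert the ext4 block counts reported by resize2fs into GiB.
# The 4 KiB block size and both block counts come from the log above.
BLOCK_SIZE = 4096

def blocks_to_gib(blocks: int) -> float:
    """Return the size in GiB for a given number of 4 KiB blocks."""
    return blocks * BLOCK_SIZE / 2**30

old_gib = blocks_to_gib(1_617_920)   # size before the online resize
new_gib = blocks_to_gib(15_121_403)  # size after the online resize

print(f"/dev/vda9 grew from {old_gib:.2f} GiB to {new_gib:.2f} GiB")
```

This confirms the online resize took /dev/vda9 from roughly 6.2 GiB to roughly 57.7 GiB, consistent with the "on-line resizing required" message that follows.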
Sep 12 19:24:19.930523 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 19:24:19.942341 systemd[1]: Starting sshkeys.service... Sep 12 19:24:20.001452 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Sep 12 19:24:20.010786 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.hostname1' Sep 12 19:24:20.021874 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Sep 12 19:24:20.023601 dbus-daemon[1476]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1511 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Sep 12 19:24:20.025555 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Sep 12 19:24:20.041414 systemd[1]: Starting polkit.service - Authorization Manager... Sep 12 19:24:20.233952 polkitd[1544]: Started polkitd version 121 Sep 12 19:24:20.296595 polkitd[1544]: Loading rules from directory /etc/polkit-1/rules.d Sep 12 19:24:20.296699 polkitd[1544]: Loading rules from directory /usr/share/polkit-1/rules.d Sep 12 19:24:20.299264 polkitd[1544]: Finished loading, compiling and executing 2 rules Sep 12 19:24:20.301852 dbus-daemon[1476]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Sep 12 19:24:20.302121 systemd[1]: Started polkit.service - Authorization Manager. 
Sep 12 19:24:20.303941 polkitd[1544]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Sep 12 19:24:20.381000 systemd-hostnamed[1511]: Hostname set to (static) Sep 12 19:24:20.507009 containerd[1514]: time="2025-09-12T19:24:20.504296893Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Sep 12 19:24:20.517274 systemd-networkd[1427]: eth0: Gained IPv6LL Sep 12 19:24:20.527979 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 19:24:20.529943 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 19:24:20.541432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:24:20.543818 locksmithd[1524]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 19:24:20.550484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 19:24:20.753948 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 19:24:20.822482 containerd[1514]: time="2025-09-12T19:24:20.820924695Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.829213 containerd[1514]: time="2025-09-12T19:24:20.829161521Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:24:20.831040 containerd[1514]: time="2025-09-12T19:24:20.831008903Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Sep 12 19:24:20.831209 containerd[1514]: time="2025-09-12T19:24:20.831164759Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838212977Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838271721Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838425660Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838455218Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838759393Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838785881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838805638Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.838821821Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.839030604Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.839487182Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840139 containerd[1514]: time="2025-09-12T19:24:20.839627682Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Sep 12 19:24:20.840633 containerd[1514]: time="2025-09-12T19:24:20.839650141Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Sep 12 19:24:20.840633 containerd[1514]: time="2025-09-12T19:24:20.839884754Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Sep 12 19:24:20.843392 containerd[1514]: time="2025-09-12T19:24:20.842028431Z" level=info msg="metadata content store policy set" policy=shared Sep 12 19:24:20.849342 containerd[1514]: time="2025-09-12T19:24:20.849306927Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Sep 12 19:24:20.849654 containerd[1514]: time="2025-09-12T19:24:20.849413644Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Sep 12 19:24:20.849654 containerd[1514]: time="2025-09-12T19:24:20.849449615Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Sep 12 19:24:20.849654 containerd[1514]: time="2025-09-12T19:24:20.849501838Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Sep 12 19:24:20.849654 containerd[1514]: time="2025-09-12T19:24:20.849526929Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.849837418Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850452567Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850787851Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850817939Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850839733Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850860376Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850908584Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850944579Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.850989188Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.851030782Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.851066854Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.851092369Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.851120847Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Sep 12 19:24:20.854146 containerd[1514]: time="2025-09-12T19:24:20.851173177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851211949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851232979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851252033Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851269830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851311646Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851335946Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851372392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851393733Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851444347Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851464426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851488142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851548285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851574643Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851618249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.854662 containerd[1514]: time="2025-09-12T19:24:20.851651650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851674067Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851810473Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851844191Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851863392Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851892311Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851914627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Sep 12 19:24:20.855247 containerd[1514]: time="2025-09-12T19:24:20.851941550Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Sep 12 19:24:20.865176 containerd[1514]: time="2025-09-12T19:24:20.862823238Z" level=info msg="NRI interface is disabled by configuration." Sep 12 19:24:20.865176 containerd[1514]: time="2025-09-12T19:24:20.862894699Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Sep 12 19:24:20.865275 containerd[1514]: time="2025-09-12T19:24:20.863693814Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Sep 12 19:24:20.865275 containerd[1514]: time="2025-09-12T19:24:20.863820315Z" level=info msg="Connect containerd service" Sep 12 19:24:20.865275 containerd[1514]: time="2025-09-12T19:24:20.863884011Z" level=info msg="using legacy CRI server" Sep 12 19:24:20.865275 containerd[1514]: time="2025-09-12T19:24:20.863921532Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 19:24:20.867366 containerd[1514]: time="2025-09-12T19:24:20.867090908Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Sep 12 19:24:20.872004 containerd[1514]: time="2025-09-12T19:24:20.871950940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 19:24:20.879201 containerd[1514]: time="2025-09-12T19:24:20.879099613Z" level=info msg="Start subscribing containerd event" Sep 12 19:24:20.879314 containerd[1514]: time="2025-09-12T19:24:20.879227024Z" level=info msg="Start recovering state" Sep 12 19:24:20.879808 containerd[1514]: time="2025-09-12T19:24:20.879380163Z" level=info msg="Start event monitor" Sep 12 19:24:20.879808 containerd[1514]: time="2025-09-12T19:24:20.879435153Z" level=info msg="Start 
snapshots syncer" Sep 12 19:24:20.879808 containerd[1514]: time="2025-09-12T19:24:20.879462319Z" level=info msg="Start cni network conf syncer for default" Sep 12 19:24:20.879808 containerd[1514]: time="2025-09-12T19:24:20.879500358Z" level=info msg="Start streaming server" Sep 12 19:24:20.880205 containerd[1514]: time="2025-09-12T19:24:20.880176755Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 19:24:20.880291 containerd[1514]: time="2025-09-12T19:24:20.880265130Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 19:24:20.886864 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 19:24:20.901183 containerd[1514]: time="2025-09-12T19:24:20.901097149Z" level=info msg="containerd successfully booted in 0.413267s" Sep 12 19:24:21.370636 sshd_keygen[1512]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 19:24:21.426794 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 19:24:21.443248 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 19:24:21.469028 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 19:24:21.469403 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 19:24:21.484205 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 19:24:21.551406 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 19:24:21.553983 tar[1501]: linux-amd64/README.md Sep 12 19:24:21.579734 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 19:24:21.584376 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Sep 12 19:24:21.592724 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 19:24:21.595202 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
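The containerd entries above use a logfmt-style layout (`time="…" level=info msg="…"`). A minimal parser for such lines can be sketched as follows; note this is a simplified illustration that handles bare and double-quoted values only, not escaped quotes inside values:

```python
import re

# Minimal logfmt-style parser for containerd log lines like those above.
# Matches key=value pairs where the value is either a bare token or a
# double-quoted string (no escape handling -- a sketch, not a full parser).
PAIR = re.compile(r'(\w+)=("([^"]*)"|\S+)')

def parse_containerd_line(line: str) -> dict:
    return {m.group(1): m.group(3) if m.group(3) is not None else m.group(2)
            for m in PAIR.finditer(line)}

# Sample taken verbatim from the boot log above.
sample = ('time="2025-09-12T19:24:20.901097149Z" level=info '
          'msg="containerd successfully booted in 0.413267s"')
fields = parse_containerd_line(sample)
print(fields["level"], "-", fields["msg"])
```

A parser like this is handy when filtering the plugin-loading spam above down to the `level=error` and `skip plugin` entries that actually matter.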
Sep 12 19:24:22.026275 systemd-networkd[1427]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8add:24:19ff:fee6:2b76/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8add:24:19ff:fee6:2b76/64 assigned by NDisc. Sep 12 19:24:22.026291 systemd-networkd[1427]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Sep 12 19:24:22.725567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 19:24:22.749081 (kubelet)[1603]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 19:24:22.866893 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 19:24:22.878933 systemd[1]: Started sshd@0-10.230.43.118:22-139.178.68.195:48712.service - OpenSSH per-connection server daemon (139.178.68.195:48712). Sep 12 19:24:23.587392 kubelet[1603]: E0912 19:24:23.587300 1603 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 19:24:23.591671 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 19:24:23.591943 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 19:24:23.594142 systemd[1]: kubelet.service: Consumed 2.034s CPU time. Sep 12 19:24:23.878219 sshd[1608]: Accepted publickey for core from 139.178.68.195 port 48712 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:24:23.881335 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:24:23.899992 systemd-logind[1485]: New session 1 of user core. Sep 12 19:24:23.903424 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
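The kubelet failure above is the expected state on a node that has not yet been provisioned: /var/lib/kubelet/config.yaml is normally written during cluster join (e.g. by kubeadm), so the unit fails and systemd schedules restarts until that file exists. A small illustrative helper (not part of any Flatcar or Kubernetes tooling) that pulls the missing path out of such an error line:

```python
import re

# Extract the missing config path from a kubelet "command failed" error
# string like the one logged above. Illustrative helper only.
ERR = ('failed to load kubelet config file, path: '
       '/var/lib/kubelet/config.yaml, error: ...')

def missing_config_path(err: str):
    m = re.search(r'path: (\S+?),', err)
    return m.group(1) if m else None

print(missing_config_path(ERR))
```

Monitoring for this specific error is one way to distinguish "node not yet joined" from a genuine kubelet crash.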
Sep 12 19:24:23.911408 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 19:24:23.947036 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 19:24:23.957506 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 19:24:23.979596 (systemd)[1616]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 19:24:24.131171 systemd[1616]: Queued start job for default target default.target. Sep 12 19:24:24.148088 systemd[1616]: Created slice app.slice - User Application Slice. Sep 12 19:24:24.148137 systemd[1616]: Reached target paths.target - Paths. Sep 12 19:24:24.148161 systemd[1616]: Reached target timers.target - Timers. Sep 12 19:24:24.150416 systemd[1616]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 19:24:24.167562 systemd[1616]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 19:24:24.167826 systemd[1616]: Reached target sockets.target - Sockets. Sep 12 19:24:24.167851 systemd[1616]: Reached target basic.target - Basic System. Sep 12 19:24:24.167924 systemd[1616]: Reached target default.target - Main User Target. Sep 12 19:24:24.168052 systemd[1616]: Startup finished in 176ms. Sep 12 19:24:24.168144 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 19:24:24.177298 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 19:24:24.819689 systemd[1]: Started sshd@1-10.230.43.118:22-139.178.68.195:48716.service - OpenSSH per-connection server daemon (139.178.68.195:48716). Sep 12 19:24:25.700105 sshd[1627]: Accepted publickey for core from 139.178.68.195 port 48716 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:24:25.702202 sshd[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:24:25.709232 systemd-logind[1485]: New session 2 of user core. 
Sep 12 19:24:25.720277 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 19:24:26.319337 sshd[1627]: pam_unix(sshd:session): session closed for user core Sep 12 19:24:26.323336 systemd-logind[1485]: Session 2 logged out. Waiting for processes to exit. Sep 12 19:24:26.324222 systemd[1]: sshd@1-10.230.43.118:22-139.178.68.195:48716.service: Deactivated successfully. Sep 12 19:24:26.326526 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 19:24:26.328787 systemd-logind[1485]: Removed session 2. Sep 12 19:24:26.486828 systemd[1]: Started sshd@2-10.230.43.118:22-139.178.68.195:48720.service - OpenSSH per-connection server daemon (139.178.68.195:48720). Sep 12 19:24:26.564039 coreos-metadata[1475]: Sep 12 19:24:26.563 WARN failed to locate config-drive, using the metadata service API instead Sep 12 19:24:26.596103 coreos-metadata[1475]: Sep 12 19:24:26.594 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Sep 12 19:24:26.601571 coreos-metadata[1475]: Sep 12 19:24:26.601 INFO Fetch failed with 404: resource not found Sep 12 19:24:26.601571 coreos-metadata[1475]: Sep 12 19:24:26.601 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Sep 12 19:24:26.602295 coreos-metadata[1475]: Sep 12 19:24:26.602 INFO Fetch successful Sep 12 19:24:26.602627 coreos-metadata[1475]: Sep 12 19:24:26.602 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Sep 12 19:24:26.619225 coreos-metadata[1475]: Sep 12 19:24:26.619 INFO Fetch successful Sep 12 19:24:26.619543 coreos-metadata[1475]: Sep 12 19:24:26.619 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Sep 12 19:24:26.638074 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 19:24:26.638706 coreos-metadata[1475]: Sep 12 19:24:26.638 INFO Fetch successful Sep 12 19:24:26.638706 coreos-metadata[1475]: Sep 12 19:24:26.638 INFO Fetching 
http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Sep 12 19:24:26.644631 login[1593]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Sep 12 19:24:26.648289 systemd-logind[1485]: New session 3 of user core. Sep 12 19:24:26.652957 coreos-metadata[1475]: Sep 12 19:24:26.652 INFO Fetch successful Sep 12 19:24:26.653332 coreos-metadata[1475]: Sep 12 19:24:26.653 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Sep 12 19:24:26.655587 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 19:24:26.660323 systemd-logind[1485]: New session 4 of user core. Sep 12 19:24:26.669319 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 19:24:26.674051 coreos-metadata[1475]: Sep 12 19:24:26.674 INFO Fetch successful Sep 12 19:24:26.714700 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Sep 12 19:24:26.716818 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 19:24:27.385266 sshd[1634]: Accepted publickey for core from 139.178.68.195 port 48720 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:24:27.387280 sshd[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:24:27.393894 systemd-logind[1485]: New session 5 of user core. Sep 12 19:24:27.405313 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 19:24:27.444076 coreos-metadata[1543]: Sep 12 19:24:27.443 WARN failed to locate config-drive, using the metadata service API instead Sep 12 19:24:27.467146 coreos-metadata[1543]: Sep 12 19:24:27.467 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Sep 12 19:24:27.502219 coreos-metadata[1543]: Sep 12 19:24:27.502 INFO Fetch successful Sep 12 19:24:27.502534 coreos-metadata[1543]: Sep 12 19:24:27.502 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Sep 12 19:24:27.531329 coreos-metadata[1543]: Sep 12 19:24:27.531 INFO Fetch successful Sep 12 19:24:27.533943 unknown[1543]: wrote ssh authorized keys file for user: core Sep 12 19:24:27.561751 update-ssh-keys[1673]: Updated "/home/core/.ssh/authorized_keys" Sep 12 19:24:27.562738 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Sep 12 19:24:27.566493 systemd[1]: Finished sshkeys.service. Sep 12 19:24:27.567912 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 19:24:27.568509 systemd[1]: Startup finished in 1.759s (kernel) + 15.212s (initrd) + 12.906s (userspace) = 29.879s. Sep 12 19:24:28.009308 sshd[1634]: pam_unix(sshd:session): session closed for user core Sep 12 19:24:28.013125 systemd[1]: sshd@2-10.230.43.118:22-139.178.68.195:48720.service: Deactivated successfully. Sep 12 19:24:28.015511 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 19:24:28.017632 systemd-logind[1485]: Session 5 logged out. Waiting for processes to exit. Sep 12 19:24:28.018994 systemd-logind[1485]: Removed session 5. Sep 12 19:24:33.842545 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 19:24:33.856280 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:24:34.205864 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
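The "Startup finished" line above reports per-phase times that systemd rounds independently, so the parts need not sum exactly to the printed total (here 1.759 + 15.212 + 12.906 = 29.877 against a reported 29.879s, a 2 ms rounding delta). A quick sanity check:

```python
# Sanity-check the systemd "Startup finished" breakdown from the log above.
# Each phase is rounded to milliseconds independently, so the parts may
# differ from the reported total by a few milliseconds.
phases = {"kernel": 1.759, "initrd": 15.212, "userspace": 12.906}
reported_total = 29.879

parts_sum = sum(phases.values())
print(f"sum of phases: {parts_sum:.3f}s, reported: {reported_total}s, "
      f"delta: {abs(parts_sum - reported_total) * 1000:.0f} ms")
```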
Sep 12 19:24:34.218478 (kubelet)[1687]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 19:24:34.309985 kubelet[1687]: E0912 19:24:34.309849 1687 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 19:24:34.314797 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 19:24:34.315272 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 19:24:38.175377 systemd[1]: Started sshd@3-10.230.43.118:22-139.178.68.195:53076.service - OpenSSH per-connection server daemon (139.178.68.195:53076). Sep 12 19:24:39.071603 sshd[1694]: Accepted publickey for core from 139.178.68.195 port 53076 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg Sep 12 19:24:39.073772 sshd[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 19:24:39.081695 systemd-logind[1485]: New session 6 of user core. Sep 12 19:24:39.088201 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 19:24:39.696514 sshd[1694]: pam_unix(sshd:session): session closed for user core Sep 12 19:24:39.701625 systemd[1]: sshd@3-10.230.43.118:22-139.178.68.195:53076.service: Deactivated successfully. Sep 12 19:24:39.704523 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 19:24:39.706260 systemd-logind[1485]: Session 6 logged out. Waiting for processes to exit. Sep 12 19:24:39.707688 systemd-logind[1485]: Removed session 6. Sep 12 19:24:39.861380 systemd[1]: Started sshd@4-10.230.43.118:22-139.178.68.195:53078.service - OpenSSH per-connection server daemon (139.178.68.195:53078). 
Sep 12 19:24:40.759501 sshd[1701]: Accepted publickey for core from 139.178.68.195 port 53078 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:24:40.761718 sshd[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:24:40.768087 systemd-logind[1485]: New session 7 of user core.
Sep 12 19:24:40.776289 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 19:24:41.377122 sshd[1701]: pam_unix(sshd:session): session closed for user core
Sep 12 19:24:41.383968 systemd[1]: sshd@4-10.230.43.118:22-139.178.68.195:53078.service: Deactivated successfully.
Sep 12 19:24:41.386686 systemd[1]: session-7.scope: Deactivated successfully.
Sep 12 19:24:41.387677 systemd-logind[1485]: Session 7 logged out. Waiting for processes to exit.
Sep 12 19:24:41.389574 systemd-logind[1485]: Removed session 7.
Sep 12 19:24:41.543444 systemd[1]: Started sshd@5-10.230.43.118:22-139.178.68.195:58690.service - OpenSSH per-connection server daemon (139.178.68.195:58690).
Sep 12 19:24:42.426600 sshd[1708]: Accepted publickey for core from 139.178.68.195 port 58690 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:24:42.428722 sshd[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:24:42.435315 systemd-logind[1485]: New session 8 of user core.
Sep 12 19:24:42.442265 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 12 19:24:43.046766 sshd[1708]: pam_unix(sshd:session): session closed for user core
Sep 12 19:24:43.051791 systemd[1]: sshd@5-10.230.43.118:22-139.178.68.195:58690.service: Deactivated successfully.
Sep 12 19:24:43.053929 systemd[1]: session-8.scope: Deactivated successfully.
Sep 12 19:24:43.054814 systemd-logind[1485]: Session 8 logged out. Waiting for processes to exit.
Sep 12 19:24:43.056275 systemd-logind[1485]: Removed session 8.
Sep 12 19:24:43.204375 systemd[1]: Started sshd@6-10.230.43.118:22-139.178.68.195:58704.service - OpenSSH per-connection server daemon (139.178.68.195:58704).
Sep 12 19:24:44.082531 sshd[1715]: Accepted publickey for core from 139.178.68.195 port 58704 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:24:44.084540 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:24:44.092694 systemd-logind[1485]: New session 9 of user core.
Sep 12 19:24:44.109300 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 12 19:24:44.565289 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 19:24:44.570042 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 19:24:44.570521 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:24:44.575218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:24:44.587771 sudo[1718]: pam_unix(sudo:session): session closed for user root
Sep 12 19:24:44.732353 sshd[1715]: pam_unix(sshd:session): session closed for user core
Sep 12 19:24:44.737182 systemd[1]: sshd@6-10.230.43.118:22-139.178.68.195:58704.service: Deactivated successfully.
Sep 12 19:24:44.739547 systemd[1]: session-9.scope: Deactivated successfully.
Sep 12 19:24:44.741674 systemd-logind[1485]: Session 9 logged out. Waiting for processes to exit.
Sep 12 19:24:44.743677 systemd-logind[1485]: Removed session 9.
Sep 12 19:24:44.849009 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:24:44.855174 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:24:44.894309 systemd[1]: Started sshd@7-10.230.43.118:22-139.178.68.195:58706.service - OpenSSH per-connection server daemon (139.178.68.195:58706).
Sep 12 19:24:44.925299 kubelet[1730]: E0912 19:24:44.925227 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:24:44.928659 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:24:44.928945 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:24:45.793636 sshd[1736]: Accepted publickey for core from 139.178.68.195 port 58706 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:24:45.795909 sshd[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:24:45.803846 systemd-logind[1485]: New session 10 of user core.
Sep 12 19:24:45.815246 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 12 19:24:46.273064 sudo[1742]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 19:24:46.273801 sudo[1742]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:24:46.279242 sudo[1742]: pam_unix(sudo:session): session closed for user root
Sep 12 19:24:46.288091 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 12 19:24:46.288545 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:24:46.309368 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 12 19:24:46.312213 auditctl[1745]: No rules
Sep 12 19:24:46.313089 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 19:24:46.313402 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 12 19:24:46.317188 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 12 19:24:46.358288 augenrules[1763]: No rules
Sep 12 19:24:46.360631 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 12 19:24:46.362471 sudo[1741]: pam_unix(sudo:session): session closed for user root
Sep 12 19:24:46.507417 sshd[1736]: pam_unix(sshd:session): session closed for user core
Sep 12 19:24:46.512586 systemd[1]: sshd@7-10.230.43.118:22-139.178.68.195:58706.service: Deactivated successfully.
Sep 12 19:24:46.515023 systemd[1]: session-10.scope: Deactivated successfully.
Sep 12 19:24:46.516025 systemd-logind[1485]: Session 10 logged out. Waiting for processes to exit.
Sep 12 19:24:46.517453 systemd-logind[1485]: Removed session 10.
Sep 12 19:24:46.659045 systemd[1]: Started sshd@8-10.230.43.118:22-139.178.68.195:58718.service - OpenSSH per-connection server daemon (139.178.68.195:58718).
Sep 12 19:24:47.551779 sshd[1771]: Accepted publickey for core from 139.178.68.195 port 58718 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:24:47.553917 sshd[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:24:47.559829 systemd-logind[1485]: New session 11 of user core.
Sep 12 19:24:47.569207 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 12 19:24:48.026480 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 19:24:48.027006 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 19:24:48.812755 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 19:24:48.813418 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 19:24:49.576672 dockerd[1790]: time="2025-09-12T19:24:49.576551151Z" level=info msg="Starting up"
Sep 12 19:24:49.740498 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2189631214-merged.mount: Deactivated successfully.
Sep 12 19:24:49.773884 dockerd[1790]: time="2025-09-12T19:24:49.773265557Z" level=info msg="Loading containers: start."
Sep 12 19:24:49.937224 kernel: Initializing XFRM netlink socket
Sep 12 19:24:50.047460 systemd-networkd[1427]: docker0: Link UP
Sep 12 19:24:50.073164 dockerd[1790]: time="2025-09-12T19:24:50.073013596Z" level=info msg="Loading containers: done."
Sep 12 19:24:50.109562 dockerd[1790]: time="2025-09-12T19:24:50.109471363Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 19:24:50.110857 dockerd[1790]: time="2025-09-12T19:24:50.110822492Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 12 19:24:50.111156 dockerd[1790]: time="2025-09-12T19:24:50.111118202Z" level=info msg="Daemon has completed initialization"
Sep 12 19:24:50.112880 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2419648000-merged.mount: Deactivated successfully.
Sep 12 19:24:50.156773 dockerd[1790]: time="2025-09-12T19:24:50.156644644Z" level=info msg="API listen on /run/docker.sock"
Sep 12 19:24:50.158433 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 19:24:51.628859 containerd[1514]: time="2025-09-12T19:24:51.628658048Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 12 19:24:52.052446 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Sep 12 19:24:52.629611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3725607155.mount: Deactivated successfully.
Sep 12 19:24:55.050795 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Sep 12 19:24:55.062215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:24:55.442354 containerd[1514]: time="2025-09-12T19:24:55.442088819Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:55.447136 containerd[1514]: time="2025-09-12T19:24:55.444899417Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=28837924"
Sep 12 19:24:55.447136 containerd[1514]: time="2025-09-12T19:24:55.445889017Z" level=info msg="ImageCreate event name:\"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:55.454158 containerd[1514]: time="2025-09-12T19:24:55.454081813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:55.459703 containerd[1514]: time="2025-09-12T19:24:55.457640069Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"28834515\" in 3.828788043s"
Sep 12 19:24:55.459703 containerd[1514]: time="2025-09-12T19:24:55.457787628Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:abd2b525baf428ffb8b8b7d1e09761dc5cdb7ed0c7896a9427e29e84f8eafc59\""
Sep 12 19:24:55.461716 containerd[1514]: time="2025-09-12T19:24:55.461678805Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 12 19:24:55.467108 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:24:55.485502 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:24:55.707071 kubelet[1999]: E0912 19:24:55.706826 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:24:55.709735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:24:55.710051 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:24:58.047672 containerd[1514]: time="2025-09-12T19:24:58.045516255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:58.048777 containerd[1514]: time="2025-09-12T19:24:58.048725264Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=24787035"
Sep 12 19:24:58.052028 containerd[1514]: time="2025-09-12T19:24:58.050382042Z" level=info msg="ImageCreate event name:\"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:58.058125 containerd[1514]: time="2025-09-12T19:24:58.056929929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:24:58.058926 containerd[1514]: time="2025-09-12T19:24:58.058059125Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"26421706\" in 2.596314834s"
Sep 12 19:24:58.059079 containerd[1514]: time="2025-09-12T19:24:58.058932334Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:0debe32fbb7223500fcf8c312f2a568a5abd3ed9274d8ec6780cfb30b8861e91\""
Sep 12 19:24:58.061547 containerd[1514]: time="2025-09-12T19:24:58.061322211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 12 19:25:00.476092 containerd[1514]: time="2025-09-12T19:25:00.475839621Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:00.478701 containerd[1514]: time="2025-09-12T19:25:00.478622623Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=19176297"
Sep 12 19:25:00.481983 containerd[1514]: time="2025-09-12T19:25:00.479908846Z" level=info msg="ImageCreate event name:\"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:00.484916 containerd[1514]: time="2025-09-12T19:25:00.484880147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:00.487054 containerd[1514]: time="2025-09-12T19:25:00.487013344Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"20810986\" in 2.42564102s"
Sep 12 19:25:00.487151 containerd[1514]: time="2025-09-12T19:25:00.487118570Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:6934c23b154fcb9bf54ed5913782de746735a49f4daa4732285915050cd44ad5\""
Sep 12 19:25:00.489509 containerd[1514]: time="2025-09-12T19:25:00.489480918Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 12 19:25:02.322724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1643018298.mount: Deactivated successfully.
Sep 12 19:25:03.339225 containerd[1514]: time="2025-09-12T19:25:03.338111825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:03.340886 containerd[1514]: time="2025-09-12T19:25:03.340641295Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=30924214"
Sep 12 19:25:03.341985 containerd[1514]: time="2025-09-12T19:25:03.341687963Z" level=info msg="ImageCreate event name:\"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:03.344990 containerd[1514]: time="2025-09-12T19:25:03.344827675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:03.346681 containerd[1514]: time="2025-09-12T19:25:03.346282890Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"30923225\" in 2.856753276s"
Sep 12 19:25:03.346681 containerd[1514]: time="2025-09-12T19:25:03.346350334Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:fa3fdca615a501743d8deb39729a96e731312aac8d96accec061d5265360332f\""
Sep 12 19:25:03.348729 containerd[1514]: time="2025-09-12T19:25:03.348698079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 12 19:25:03.990798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14063640.mount: Deactivated successfully.
Sep 12 19:25:04.700989 update_engine[1486]: I20250912 19:25:04.699323 1486 update_attempter.cc:509] Updating boot flags...
Sep 12 19:25:04.836089 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2078)
Sep 12 19:25:05.071693 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2078)
Sep 12 19:25:05.800243 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Sep 12 19:25:05.817306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:25:05.852867 containerd[1514]: time="2025-09-12T19:25:05.852783287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:05.857018 containerd[1514]: time="2025-09-12T19:25:05.855186912Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249"
Sep 12 19:25:05.857018 containerd[1514]: time="2025-09-12T19:25:05.856586859Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:05.861992 containerd[1514]: time="2025-09-12T19:25:05.861503081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:05.865520 containerd[1514]: time="2025-09-12T19:25:05.865313617Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.516560012s"
Sep 12 19:25:05.865520 containerd[1514]: time="2025-09-12T19:25:05.865365283Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Sep 12 19:25:05.866669 containerd[1514]: time="2025-09-12T19:25:05.866601608Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 19:25:06.195654 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:25:06.205507 (kubelet)[2097]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 19:25:06.301830 kubelet[2097]: E0912 19:25:06.301634 2097 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 19:25:06.305154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 19:25:06.305615 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 19:25:06.508992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2246347562.mount: Deactivated successfully.
Sep 12 19:25:06.515249 containerd[1514]: time="2025-09-12T19:25:06.515183900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:06.516993 containerd[1514]: time="2025-09-12T19:25:06.516684197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Sep 12 19:25:06.517714 containerd[1514]: time="2025-09-12T19:25:06.517647728Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:06.520992 containerd[1514]: time="2025-09-12T19:25:06.520728066Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:06.522343 containerd[1514]: time="2025-09-12T19:25:06.522099431Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 655.461361ms"
Sep 12 19:25:06.522343 containerd[1514]: time="2025-09-12T19:25:06.522141291Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Sep 12 19:25:06.523588 containerd[1514]: time="2025-09-12T19:25:06.523156823Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 12 19:25:07.257458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2393335147.mount: Deactivated successfully.
Sep 12 19:25:11.231463 containerd[1514]: time="2025-09-12T19:25:11.231302638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:11.235007 containerd[1514]: time="2025-09-12T19:25:11.234295641Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57682064"
Sep 12 19:25:11.235007 containerd[1514]: time="2025-09-12T19:25:11.234433175Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:11.243984 containerd[1514]: time="2025-09-12T19:25:11.241669869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:11.248606 containerd[1514]: time="2025-09-12T19:25:11.248528209Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 4.72530765s"
Sep 12 19:25:11.248775 containerd[1514]: time="2025-09-12T19:25:11.248740972Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Sep 12 19:25:16.413353 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Sep 12 19:25:16.426354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:25:16.456365 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 19:25:16.456539 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 19:25:16.457423 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:25:16.467387 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:25:16.510658 systemd[1]: Reloading requested from client PID 2193 ('systemctl') (unit session-11.scope)...
Sep 12 19:25:16.510706 systemd[1]: Reloading...
Sep 12 19:25:16.715127 zram_generator::config[2235]: No configuration found.
Sep 12 19:25:16.915277 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 12 19:25:17.025202 systemd[1]: Reloading finished in 513 ms.
Sep 12 19:25:17.102375 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:25:17.107292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:25:17.110458 systemd[1]: kubelet.service: Deactivated successfully.
Sep 12 19:25:17.110800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:25:17.124999 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 19:25:17.299614 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 19:25:17.311856 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 19:25:17.402413 kubelet[2301]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:25:17.402413 kubelet[2301]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 19:25:17.402413 kubelet[2301]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 19:25:17.403083 kubelet[2301]: I0912 19:25:17.402552 2301 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 19:25:18.028428 kubelet[2301]: I0912 19:25:18.028277 2301 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 12 19:25:18.028428 kubelet[2301]: I0912 19:25:18.028390 2301 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 19:25:18.029329 kubelet[2301]: I0912 19:25:18.029294 2301 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 12 19:25:18.066929 kubelet[2301]: E0912 19:25:18.066161 2301 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.43.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:25:18.066929 kubelet[2301]: I0912 19:25:18.066508 2301 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 19:25:18.094187 kubelet[2301]: E0912 19:25:18.094105 2301 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 12 19:25:18.094187 kubelet[2301]: I0912 19:25:18.094182 2301 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 12 19:25:18.107732 kubelet[2301]: I0912 19:25:18.107683 2301 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 19:25:18.110712 kubelet[2301]: I0912 19:25:18.110152 2301 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 19:25:18.110712 kubelet[2301]: I0912 19:25:18.110244 2301 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gt1mb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 19:25:18.113028 kubelet[2301]: I0912 19:25:18.113001 2301 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 19:25:18.113239 kubelet[2301]: I0912 19:25:18.113200 2301 container_manager_linux.go:304] "Creating device plugin manager"
Sep 12 19:25:18.114717 kubelet[2301]: I0912 19:25:18.114695 2301 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 19:25:18.122671 kubelet[2301]: I0912 19:25:18.122638 2301 kubelet.go:446] "Attempting to sync node with API server"
Sep 12 19:25:18.122885 kubelet[2301]: I0912 19:25:18.122864 2301 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 19:25:18.123053 kubelet[2301]: I0912 19:25:18.123034 2301 kubelet.go:352] "Adding apiserver pod source"
Sep 12 19:25:18.123209 kubelet[2301]: I0912 19:25:18.123186 2301 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 19:25:18.133221 kubelet[2301]: W0912 19:25:18.133069 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.43.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gt1mb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused
Sep 12 19:25:18.133221 kubelet[2301]: E0912 19:25:18.133196 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.43.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gt1mb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:25:18.134984 kubelet[2301]: W0912 19:25:18.133720 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.43.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused
Sep 12 19:25:18.134984 kubelet[2301]: E0912 19:25:18.133774 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.43.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:25:18.135843 kubelet[2301]: I0912 19:25:18.135798 2301 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 12 19:25:18.139845 kubelet[2301]: I0912 19:25:18.139787 2301 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 12 19:25:18.140036 kubelet[2301]: W0912 19:25:18.139947 2301 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 19:25:18.141601 kubelet[2301]: I0912 19:25:18.141563 2301 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 12 19:25:18.141699 kubelet[2301]: I0912 19:25:18.141630 2301 server.go:1287] "Started kubelet"
Sep 12 19:25:18.147815 kubelet[2301]: I0912 19:25:18.147772 2301 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 12 19:25:18.149496 kubelet[2301]: I0912 19:25:18.149216 2301 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 12 19:25:18.151983 kubelet[2301]: I0912 19:25:18.150022 2301 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 12 19:25:18.151983 kubelet[2301]: I0912 19:25:18.150539 2301 server.go:479] "Adding debug handlers to kubelet server"
Sep 12 19:25:18.154194 kubelet[2301]: E0912 19:25:18.151157 2301 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.43.118:6443/api/v1/namespaces/default/events\": dial tcp 10.230.43.118:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-gt1mb.gb1.brightbox.com.18649f7e02b743e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-gt1mb.gb1.brightbox.com,UID:srv-gt1mb.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-gt1mb.gb1.brightbox.com,},FirstTimestamp:2025-09-12 19:25:18.141596647 +0000 UTC m=+0.822146232,LastTimestamp:2025-09-12 19:25:18.141596647 +0000 UTC m=+0.822146232,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-gt1mb.gb1.brightbox.com,}"
Sep 12 19:25:18.157976 kubelet[2301]: I0912 19:25:18.157417 2301 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 12 19:25:18.166143 kubelet[2301]: I0912 19:25:18.166065 2301 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 12 19:25:18.166693 kubelet[2301]: E0912 19:25:18.166666 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gt1mb.gb1.brightbox.com\" not found"
Sep 12 19:25:18.167373 kubelet[2301]: I0912 19:25:18.167350 2301 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 12 19:25:18.168215 kubelet[2301]: I0912 19:25:18.168193 2301 reconciler.go:26] "Reconciler: start to sync state"
Sep 12 19:25:18.171707 kubelet[2301]: I0912 19:25:18.171673 2301 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 12 19:25:18.174052 kubelet[2301]: W0912 19:25:18.173976 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.43.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused
Sep 12 19:25:18.174162 kubelet[2301]: E0912 19:25:18.174067 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.43.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError"
Sep 12 19:25:18.174237 kubelet[2301]: E0912 19:25:18.174183 2301 controller.go:145] "Failed to
ensure lease exists, will retry" err="Get \"https://10.230.43.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gt1mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.43.118:6443: connect: connection refused" interval="200ms" Sep 12 19:25:18.182120 kubelet[2301]: I0912 19:25:18.182086 2301 factory.go:221] Registration of the containerd container factory successfully Sep 12 19:25:18.182120 kubelet[2301]: I0912 19:25:18.182114 2301 factory.go:221] Registration of the systemd container factory successfully Sep 12 19:25:18.182260 kubelet[2301]: I0912 19:25:18.182236 2301 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 19:25:18.190521 kubelet[2301]: I0912 19:25:18.190460 2301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 19:25:18.192772 kubelet[2301]: I0912 19:25:18.192192 2301 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 19:25:18.192772 kubelet[2301]: I0912 19:25:18.192248 2301 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 19:25:18.192772 kubelet[2301]: I0912 19:25:18.192294 2301 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 19:25:18.192772 kubelet[2301]: I0912 19:25:18.192312 2301 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 19:25:18.192772 kubelet[2301]: E0912 19:25:18.192421 2301 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 19:25:18.220077 kubelet[2301]: W0912 19:25:18.220007 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.43.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused Sep 12 19:25:18.220331 kubelet[2301]: E0912 19:25:18.220288 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.43.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:18.233826 kubelet[2301]: I0912 19:25:18.233781 2301 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 19:25:18.233826 kubelet[2301]: I0912 19:25:18.233819 2301 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 19:25:18.234029 kubelet[2301]: I0912 19:25:18.233856 2301 state_mem.go:36] "Initialized new in-memory state store" Sep 12 19:25:18.236090 kubelet[2301]: I0912 19:25:18.236044 2301 policy_none.go:49] "None policy: Start" Sep 12 19:25:18.236090 kubelet[2301]: I0912 19:25:18.236092 2301 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 19:25:18.236243 kubelet[2301]: I0912 19:25:18.236124 2301 state_mem.go:35] "Initializing new in-memory state store" Sep 12 19:25:18.246784 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Sep 12 19:25:18.265206 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 19:25:18.267104 kubelet[2301]: E0912 19:25:18.267064 2301 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" Sep 12 19:25:18.271473 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 19:25:18.290064 kubelet[2301]: I0912 19:25:18.289928 2301 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 19:25:18.292137 kubelet[2301]: I0912 19:25:18.292091 2301 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 19:25:18.292223 kubelet[2301]: I0912 19:25:18.292124 2301 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 19:25:18.298478 kubelet[2301]: I0912 19:25:18.298062 2301 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 19:25:18.299592 kubelet[2301]: E0912 19:25:18.299478 2301 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 19:25:18.299778 kubelet[2301]: E0912 19:25:18.299708 2301 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-gt1mb.gb1.brightbox.com\" not found" Sep 12 19:25:18.312345 systemd[1]: Created slice kubepods-burstable-pod003c4da00ad860716b3d84e5063de7cb.slice - libcontainer container kubepods-burstable-pod003c4da00ad860716b3d84e5063de7cb.slice. 
Sep 12 19:25:18.322547 kubelet[2301]: E0912 19:25:18.322199 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.324939 systemd[1]: Created slice kubepods-burstable-podf0071ac8c091cf88aecc98e8feaede1c.slice - libcontainer container kubepods-burstable-podf0071ac8c091cf88aecc98e8feaede1c.slice. Sep 12 19:25:18.335480 kubelet[2301]: E0912 19:25:18.335239 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.339239 systemd[1]: Created slice kubepods-burstable-pod954f70e73a3b855c8e7e2c010d71ef91.slice - libcontainer container kubepods-burstable-pod954f70e73a3b855c8e7e2c010d71ef91.slice. Sep 12 19:25:18.342028 kubelet[2301]: E0912 19:25:18.342003 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.375338 kubelet[2301]: E0912 19:25:18.375290 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.43.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gt1mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.43.118:6443: connect: connection refused" interval="400ms" Sep 12 19:25:18.395831 kubelet[2301]: I0912 19:25:18.395783 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.396364 kubelet[2301]: E0912 19:25:18.396320 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.43.118:6443/api/v1/nodes\": dial tcp 10.230.43.118:6443: connect: connection refused" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.469205 kubelet[2301]: I0912 19:25:18.469108 2301 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-ca-certs\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.469205 kubelet[2301]: I0912 19:25:18.469214 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470132 kubelet[2301]: I0912 19:25:18.469252 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-k8s-certs\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470132 kubelet[2301]: I0912 19:25:18.469302 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-k8s-certs\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470132 kubelet[2301]: I0912 19:25:18.469327 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-ca-certs\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: 
\"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470132 kubelet[2301]: I0912 19:25:18.469366 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-flexvolume-dir\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470132 kubelet[2301]: I0912 19:25:18.469416 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-kubeconfig\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470409 kubelet[2301]: I0912 19:25:18.469444 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.470409 kubelet[2301]: I0912 19:25:18.469472 2301 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/954f70e73a3b855c8e7e2c010d71ef91-kubeconfig\") pod \"kube-scheduler-srv-gt1mb.gb1.brightbox.com\" (UID: \"954f70e73a3b855c8e7e2c010d71ef91\") " pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.600284 kubelet[2301]: I0912 19:25:18.600223 2301 kubelet_node_status.go:75] "Attempting to register node" 
node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.600737 kubelet[2301]: E0912 19:25:18.600704 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.43.118:6443/api/v1/nodes\": dial tcp 10.230.43.118:6443: connect: connection refused" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:18.624466 containerd[1514]: time="2025-09-12T19:25:18.624252485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gt1mb.gb1.brightbox.com,Uid:003c4da00ad860716b3d84e5063de7cb,Namespace:kube-system,Attempt:0,}" Sep 12 19:25:18.642933 containerd[1514]: time="2025-09-12T19:25:18.642330568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gt1mb.gb1.brightbox.com,Uid:f0071ac8c091cf88aecc98e8feaede1c,Namespace:kube-system,Attempt:0,}" Sep 12 19:25:18.648366 containerd[1514]: time="2025-09-12T19:25:18.648198164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gt1mb.gb1.brightbox.com,Uid:954f70e73a3b855c8e7e2c010d71ef91,Namespace:kube-system,Attempt:0,}" Sep 12 19:25:18.776712 kubelet[2301]: E0912 19:25:18.776638 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.43.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gt1mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.43.118:6443: connect: connection refused" interval="800ms" Sep 12 19:25:19.004654 kubelet[2301]: I0912 19:25:19.004521 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:19.005143 kubelet[2301]: E0912 19:25:19.004982 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.43.118:6443/api/v1/nodes\": dial tcp 10.230.43.118:6443: connect: connection refused" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:19.255558 kubelet[2301]: W0912 19:25:19.255332 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://10.230.43.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused Sep 12 19:25:19.255558 kubelet[2301]: E0912 19:25:19.255410 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.43.118:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:19.324283 kubelet[2301]: W0912 19:25:19.324100 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.43.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gt1mb.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused Sep 12 19:25:19.324283 kubelet[2301]: E0912 19:25:19.324228 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.43.118:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-gt1mb.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:19.416567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1031276539.mount: Deactivated successfully. 
Sep 12 19:25:19.422984 containerd[1514]: time="2025-09-12T19:25:19.421194493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 19:25:19.422984 containerd[1514]: time="2025-09-12T19:25:19.422600090Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 19:25:19.423206 containerd[1514]: time="2025-09-12T19:25:19.423118182Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 19:25:19.424157 containerd[1514]: time="2025-09-12T19:25:19.424117661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Sep 12 19:25:19.425307 containerd[1514]: time="2025-09-12T19:25:19.425256037Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 19:25:19.426271 containerd[1514]: time="2025-09-12T19:25:19.426222525Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 12 19:25:19.427547 containerd[1514]: time="2025-09-12T19:25:19.427509165Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 19:25:19.433855 containerd[1514]: time="2025-09-12T19:25:19.433810370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 19:25:19.436289 
kubelet[2301]: W0912 19:25:19.436154 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.43.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused Sep 12 19:25:19.436289 kubelet[2301]: E0912 19:25:19.436243 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.43.118:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:19.437153 containerd[1514]: time="2025-09-12T19:25:19.437105023Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 812.584604ms" Sep 12 19:25:19.439868 containerd[1514]: time="2025-09-12T19:25:19.439831271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 791.498564ms" Sep 12 19:25:19.442411 containerd[1514]: time="2025-09-12T19:25:19.442329879Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 799.872084ms" Sep 12 19:25:19.482000 kubelet[2301]: W0912 
19:25:19.480518 2301 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.43.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.43.118:6443: connect: connection refused Sep 12 19:25:19.482000 kubelet[2301]: E0912 19:25:19.480603 2301 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.43.118:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:19.579650 kubelet[2301]: E0912 19:25:19.579550 2301 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.43.118:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-gt1mb.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.43.118:6443: connect: connection refused" interval="1.6s" Sep 12 19:25:19.663849 containerd[1514]: time="2025-09-12T19:25:19.662941288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:19.663849 containerd[1514]: time="2025-09-12T19:25:19.663117953Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:19.663849 containerd[1514]: time="2025-09-12T19:25:19.663142661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.663849 containerd[1514]: time="2025-09-12T19:25:19.663319374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.680777 containerd[1514]: time="2025-09-12T19:25:19.680256081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:19.681052 containerd[1514]: time="2025-09-12T19:25:19.680992826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:19.681297 containerd[1514]: time="2025-09-12T19:25:19.681237008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.681775 containerd[1514]: time="2025-09-12T19:25:19.681650887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.682804 containerd[1514]: time="2025-09-12T19:25:19.682676787Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:19.684349 containerd[1514]: time="2025-09-12T19:25:19.684117980Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:19.684349 containerd[1514]: time="2025-09-12T19:25:19.684147465Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.684349 containerd[1514]: time="2025-09-12T19:25:19.684258916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:19.723519 systemd[1]: Started cri-containerd-2526735d4d890c524152b9e094ce85983dd4babcf95eb75e374b3b372af2c2d2.scope - libcontainer container 2526735d4d890c524152b9e094ce85983dd4babcf95eb75e374b3b372af2c2d2. Sep 12 19:25:19.736346 systemd[1]: Started cri-containerd-e909eebe4351dd203bafeb2daa0651ec523501664ac01d9a7f8fe07a1a52ef28.scope - libcontainer container e909eebe4351dd203bafeb2daa0651ec523501664ac01d9a7f8fe07a1a52ef28. 
Sep 12 19:25:19.859164 kubelet[2301]: I0912 19:25:19.857443 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:19.864350 kubelet[2301]: E0912 19:25:19.864275 2301 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.43.118:6443/api/v1/nodes\": dial tcp 10.230.43.118:6443: connect: connection refused" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:19.879205 systemd[1]: Started cri-containerd-0eb44ff54e688f20e7200adcee3957fe37ef7471c4667402d10f56f093a0371f.scope - libcontainer container 0eb44ff54e688f20e7200adcee3957fe37ef7471c4667402d10f56f093a0371f. Sep 12 19:25:19.955913 containerd[1514]: time="2025-09-12T19:25:19.955860333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-gt1mb.gb1.brightbox.com,Uid:003c4da00ad860716b3d84e5063de7cb,Namespace:kube-system,Attempt:0,} returns sandbox id \"2526735d4d890c524152b9e094ce85983dd4babcf95eb75e374b3b372af2c2d2\"" Sep 12 19:25:19.987307 containerd[1514]: time="2025-09-12T19:25:19.986985013Z" level=info msg="CreateContainer within sandbox \"2526735d4d890c524152b9e094ce85983dd4babcf95eb75e374b3b372af2c2d2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 19:25:19.995081 containerd[1514]: time="2025-09-12T19:25:19.994842374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-gt1mb.gb1.brightbox.com,Uid:954f70e73a3b855c8e7e2c010d71ef91,Namespace:kube-system,Attempt:0,} returns sandbox id \"e909eebe4351dd203bafeb2daa0651ec523501664ac01d9a7f8fe07a1a52ef28\"" Sep 12 19:25:20.001507 containerd[1514]: time="2025-09-12T19:25:20.001221004Z" level=info msg="CreateContainer within sandbox \"e909eebe4351dd203bafeb2daa0651ec523501664ac01d9a7f8fe07a1a52ef28\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 19:25:20.006418 containerd[1514]: time="2025-09-12T19:25:20.006334354Z" level=info msg="CreateContainer within sandbox 
\"2526735d4d890c524152b9e094ce85983dd4babcf95eb75e374b3b372af2c2d2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d72b4919343be992db2363a8e2350daed2e7533e66e1cfb55cd316bc4d1e6e9c\"" Sep 12 19:25:20.008290 containerd[1514]: time="2025-09-12T19:25:20.007684175Z" level=info msg="StartContainer for \"d72b4919343be992db2363a8e2350daed2e7533e66e1cfb55cd316bc4d1e6e9c\"" Sep 12 19:25:20.014769 containerd[1514]: time="2025-09-12T19:25:20.014732411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-gt1mb.gb1.brightbox.com,Uid:f0071ac8c091cf88aecc98e8feaede1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0eb44ff54e688f20e7200adcee3957fe37ef7471c4667402d10f56f093a0371f\"" Sep 12 19:25:20.020143 containerd[1514]: time="2025-09-12T19:25:20.020071115Z" level=info msg="CreateContainer within sandbox \"0eb44ff54e688f20e7200adcee3957fe37ef7471c4667402d10f56f093a0371f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 19:25:20.027177 containerd[1514]: time="2025-09-12T19:25:20.027134761Z" level=info msg="CreateContainer within sandbox \"e909eebe4351dd203bafeb2daa0651ec523501664ac01d9a7f8fe07a1a52ef28\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"829c10b334ad2f199b6ea50c1ff19f8688f20532eaae0113ba8fa3c536daa601\"" Sep 12 19:25:20.028038 containerd[1514]: time="2025-09-12T19:25:20.028009290Z" level=info msg="StartContainer for \"829c10b334ad2f199b6ea50c1ff19f8688f20532eaae0113ba8fa3c536daa601\"" Sep 12 19:25:20.044117 containerd[1514]: time="2025-09-12T19:25:20.044053955Z" level=info msg="CreateContainer within sandbox \"0eb44ff54e688f20e7200adcee3957fe37ef7471c4667402d10f56f093a0371f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"517d738c3c6db5a7e9c1c43518384e4f9fd0a281aa367741a58f17b1fef1ea84\"" Sep 12 19:25:20.045729 containerd[1514]: time="2025-09-12T19:25:20.045692111Z" level=info 
msg="StartContainer for \"517d738c3c6db5a7e9c1c43518384e4f9fd0a281aa367741a58f17b1fef1ea84\"" Sep 12 19:25:20.073186 systemd[1]: Started cri-containerd-d72b4919343be992db2363a8e2350daed2e7533e66e1cfb55cd316bc4d1e6e9c.scope - libcontainer container d72b4919343be992db2363a8e2350daed2e7533e66e1cfb55cd316bc4d1e6e9c. Sep 12 19:25:20.103383 systemd[1]: Started cri-containerd-829c10b334ad2f199b6ea50c1ff19f8688f20532eaae0113ba8fa3c536daa601.scope - libcontainer container 829c10b334ad2f199b6ea50c1ff19f8688f20532eaae0113ba8fa3c536daa601. Sep 12 19:25:20.117204 systemd[1]: Started cri-containerd-517d738c3c6db5a7e9c1c43518384e4f9fd0a281aa367741a58f17b1fef1ea84.scope - libcontainer container 517d738c3c6db5a7e9c1c43518384e4f9fd0a281aa367741a58f17b1fef1ea84. Sep 12 19:25:20.207574 containerd[1514]: time="2025-09-12T19:25:20.205678665Z" level=info msg="StartContainer for \"d72b4919343be992db2363a8e2350daed2e7533e66e1cfb55cd316bc4d1e6e9c\" returns successfully" Sep 12 19:25:20.218151 containerd[1514]: time="2025-09-12T19:25:20.218084700Z" level=info msg="StartContainer for \"829c10b334ad2f199b6ea50c1ff19f8688f20532eaae0113ba8fa3c536daa601\" returns successfully" Sep 12 19:25:20.236279 kubelet[2301]: E0912 19:25:20.236233 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:20.240737 kubelet[2301]: E0912 19:25:20.240709 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:20.256389 containerd[1514]: time="2025-09-12T19:25:20.256216625Z" level=info msg="StartContainer for \"517d738c3c6db5a7e9c1c43518384e4f9fd0a281aa367741a58f17b1fef1ea84\" returns successfully" Sep 12 19:25:20.257831 kubelet[2301]: E0912 19:25:20.257749 2301 certificate_manager.go:562] "Unhandled Error" 
err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.43.118:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.43.118:6443: connect: connection refused" logger="UnhandledError" Sep 12 19:25:21.254638 kubelet[2301]: E0912 19:25:21.254586 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:21.255916 kubelet[2301]: E0912 19:25:21.255877 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:21.256388 kubelet[2301]: E0912 19:25:21.256360 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:21.468948 kubelet[2301]: I0912 19:25:21.468903 2301 kubelet_node_status.go:75] "Attempting to register node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:22.253629 kubelet[2301]: E0912 19:25:22.253550 2301 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.136749 kubelet[2301]: I0912 19:25:23.136674 2301 apiserver.go:52] "Watching apiserver" Sep 12 19:25:23.161590 kubelet[2301]: E0912 19:25:23.161543 2301 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-gt1mb.gb1.brightbox.com\" not found" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.169049 kubelet[2301]: I0912 19:25:23.169007 2301 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" 
Sep 12 19:25:23.287071 kubelet[2301]: I0912 19:25:23.287019 2301 kubelet_node_status.go:78] "Successfully registered node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.369090 kubelet[2301]: I0912 19:25:23.368415 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.376781 kubelet[2301]: E0912 19:25:23.376746 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.376781 kubelet[2301]: I0912 19:25:23.376782 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.382002 kubelet[2301]: E0912 19:25:23.381795 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.382002 kubelet[2301]: I0912 19:25:23.381874 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:23.384191 kubelet[2301]: E0912 19:25:23.384121 2301 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-gt1mb.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:24.793946 kubelet[2301]: I0912 19:25:24.793113 2301 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:24.806229 kubelet[2301]: W0912 19:25:24.805106 2301 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a 
DNS label is recommended: [must not contain dots] Sep 12 19:25:25.353192 systemd[1]: Reloading requested from client PID 2580 ('systemctl') (unit session-11.scope)... Sep 12 19:25:25.354146 systemd[1]: Reloading... Sep 12 19:25:25.564015 zram_generator::config[2619]: No configuration found. Sep 12 19:25:25.782231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 12 19:25:25.919698 systemd[1]: Reloading finished in 564 ms. Sep 12 19:25:25.996665 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:25:26.006044 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 19:25:26.006464 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 19:25:26.006578 systemd[1]: kubelet.service: Consumed 1.395s CPU time, 128.0M memory peak, 0B memory swap peak. Sep 12 19:25:26.015422 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 19:25:26.396684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 19:25:26.415459 (kubelet)[2683]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 19:25:26.539221 kubelet[2683]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 19:25:26.539221 kubelet[2683]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 19:25:26.539221 kubelet[2683]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 19:25:26.544010 kubelet[2683]: I0912 19:25:26.542888 2683 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 19:25:26.561057 kubelet[2683]: I0912 19:25:26.560297 2683 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 19:25:26.561057 kubelet[2683]: I0912 19:25:26.560336 2683 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 19:25:26.561711 kubelet[2683]: I0912 19:25:26.561349 2683 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 19:25:26.568587 kubelet[2683]: I0912 19:25:26.568020 2683 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 19:25:26.577339 kubelet[2683]: I0912 19:25:26.577309 2683 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 19:25:26.589484 kubelet[2683]: E0912 19:25:26.589229 2683 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 12 19:25:26.589484 kubelet[2683]: I0912 19:25:26.589274 2683 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 12 19:25:26.596310 kubelet[2683]: I0912 19:25:26.596190 2683 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 19:25:26.599822 kubelet[2683]: I0912 19:25:26.599050 2683 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 19:25:26.599822 kubelet[2683]: I0912 19:25:26.599115 2683 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-gt1mb.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 19:25:26.599822 kubelet[2683]: I0912 19:25:26.599473 2683 topology_manager.go:138] "Creating topology manager 
with none policy" Sep 12 19:25:26.599822 kubelet[2683]: I0912 19:25:26.599492 2683 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 19:25:26.602455 kubelet[2683]: I0912 19:25:26.599639 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 12 19:25:26.602455 kubelet[2683]: I0912 19:25:26.599984 2683 kubelet.go:446] "Attempting to sync node with API server" Sep 12 19:25:26.602455 kubelet[2683]: I0912 19:25:26.600046 2683 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 19:25:26.602455 kubelet[2683]: I0912 19:25:26.600091 2683 kubelet.go:352] "Adding apiserver pod source" Sep 12 19:25:26.602455 kubelet[2683]: I0912 19:25:26.600125 2683 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 19:25:26.604064 kubelet[2683]: I0912 19:25:26.604039 2683 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Sep 12 19:25:26.605055 kubelet[2683]: I0912 19:25:26.605034 2683 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 19:25:26.609683 kubelet[2683]: I0912 19:25:26.609030 2683 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 19:25:26.609913 kubelet[2683]: I0912 19:25:26.609892 2683 server.go:1287] "Started kubelet" Sep 12 19:25:26.617701 kubelet[2683]: I0912 19:25:26.614282 2683 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 19:25:26.617701 kubelet[2683]: I0912 19:25:26.616619 2683 server.go:479] "Adding debug handlers to kubelet server" Sep 12 19:25:26.638835 kubelet[2683]: I0912 19:25:26.637520 2683 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 19:25:26.638835 kubelet[2683]: I0912 19:25:26.638397 2683 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 19:25:26.651421 kubelet[2683]: I0912 
19:25:26.651198 2683 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 19:25:26.680263 kubelet[2683]: I0912 19:25:26.679115 2683 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 19:25:26.681848 kubelet[2683]: I0912 19:25:26.681783 2683 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 19:25:26.687211 kubelet[2683]: E0912 19:25:26.686689 2683 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-gt1mb.gb1.brightbox.com\" not found" Sep 12 19:25:26.691637 kubelet[2683]: I0912 19:25:26.691606 2683 factory.go:221] Registration of the systemd container factory successfully Sep 12 19:25:26.691769 kubelet[2683]: I0912 19:25:26.691727 2683 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 19:25:26.699553 kubelet[2683]: E0912 19:25:26.697062 2683 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 19:25:26.699553 kubelet[2683]: I0912 19:25:26.697316 2683 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 19:25:26.699553 kubelet[2683]: I0912 19:25:26.698788 2683 reconciler.go:26] "Reconciler: start to sync state" Sep 12 19:25:26.706456 kubelet[2683]: I0912 19:25:26.705553 2683 factory.go:221] Registration of the containerd container factory successfully Sep 12 19:25:26.754438 kubelet[2683]: I0912 19:25:26.754345 2683 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 19:25:26.767914 kubelet[2683]: I0912 19:25:26.767504 2683 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 12 19:25:26.767914 kubelet[2683]: I0912 19:25:26.767687 2683 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 19:25:26.767914 kubelet[2683]: I0912 19:25:26.767851 2683 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 19:25:26.767914 kubelet[2683]: I0912 19:25:26.767882 2683 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 19:25:26.768369 kubelet[2683]: E0912 19:25:26.768087 2683 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 19:25:26.837513 kubelet[2683]: I0912 19:25:26.837110 2683 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 19:25:26.837513 kubelet[2683]: I0912 19:25:26.837159 2683 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 19:25:26.837513 kubelet[2683]: I0912 19:25:26.837210 2683 state_mem.go:36] "Initialized new in-memory state store" Sep 12 19:25:26.837764 kubelet[2683]: I0912 19:25:26.837550 2683 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 19:25:26.837764 kubelet[2683]: I0912 19:25:26.837595 2683 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 19:25:26.837764 kubelet[2683]: I0912 19:25:26.837662 2683 policy_none.go:49] "None policy: Start" Sep 12 19:25:26.837764 kubelet[2683]: I0912 19:25:26.837695 2683 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 19:25:26.837764 kubelet[2683]: I0912 19:25:26.837747 2683 state_mem.go:35] "Initializing new in-memory state store" Sep 12 19:25:26.838056 kubelet[2683]: I0912 19:25:26.837942 2683 state_mem.go:75] "Updated machine memory state" Sep 12 19:25:26.868638 kubelet[2683]: I0912 19:25:26.867870 2683 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 19:25:26.868638 kubelet[2683]: I0912 
19:25:26.868302 2683 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 19:25:26.868638 kubelet[2683]: I0912 19:25:26.868331 2683 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 19:25:26.872008 kubelet[2683]: I0912 19:25:26.870496 2683 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 19:25:26.881093 kubelet[2683]: I0912 19:25:26.880759 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:26.881857 kubelet[2683]: I0912 19:25:26.881442 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:26.882461 kubelet[2683]: E0912 19:25:26.881944 2683 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 19:25:26.891957 kubelet[2683]: I0912 19:25:26.891925 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:26.899475 kubelet[2683]: W0912 19:25:26.898903 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 19:25:26.901408 kubelet[2683]: W0912 19:25:26.901384 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 19:25:26.901747 kubelet[2683]: E0912 19:25:26.901438 2683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:26.907155 kubelet[2683]: W0912 19:25:26.907111 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can 
result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 19:25:26.908466 kubelet[2683]: I0912 19:25:26.908169 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/954f70e73a3b855c8e7e2c010d71ef91-kubeconfig\") pod \"kube-scheduler-srv-gt1mb.gb1.brightbox.com\" (UID: \"954f70e73a3b855c8e7e2c010d71ef91\") " pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008610 kubelet[2683]: I0912 19:25:27.008555 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-ca-certs\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008824 kubelet[2683]: I0912 19:25:27.008619 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-usr-share-ca-certificates\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008824 kubelet[2683]: I0912 19:25:27.008669 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/003c4da00ad860716b3d84e5063de7cb-k8s-certs\") pod \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" (UID: \"003c4da00ad860716b3d84e5063de7cb\") " pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008824 kubelet[2683]: I0912 19:25:27.008720 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-ca-certs\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008824 kubelet[2683]: I0912 19:25:27.008761 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-flexvolume-dir\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.008824 kubelet[2683]: I0912 19:25:27.008789 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-k8s-certs\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.009131 kubelet[2683]: I0912 19:25:27.008814 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-kubeconfig\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.009131 kubelet[2683]: I0912 19:25:27.008853 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f0071ac8c091cf88aecc98e8feaede1c-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-gt1mb.gb1.brightbox.com\" (UID: \"f0071ac8c091cf88aecc98e8feaede1c\") " 
pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.020026 kubelet[2683]: I0912 19:25:27.019990 2683 kubelet_node_status.go:75] "Attempting to register node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.038158 kubelet[2683]: I0912 19:25:27.037779 2683 kubelet_node_status.go:124] "Node was previously registered" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.038158 kubelet[2683]: I0912 19:25:27.037898 2683 kubelet_node_status.go:78] "Successfully registered node" node="srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.603255 kubelet[2683]: I0912 19:25:27.601887 2683 apiserver.go:52] "Watching apiserver" Sep 12 19:25:27.697858 kubelet[2683]: I0912 19:25:27.697803 2683 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 19:25:27.812938 kubelet[2683]: I0912 19:25:27.811746 2683 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.822011 kubelet[2683]: W0912 19:25:27.820810 2683 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Sep 12 19:25:27.822011 kubelet[2683]: E0912 19:25:27.820877 2683 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-gt1mb.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" Sep 12 19:25:27.873087 kubelet[2683]: I0912 19:25:27.871714 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-gt1mb.gb1.brightbox.com" podStartSLOduration=3.871666695 podStartE2EDuration="3.871666695s" podCreationTimestamp="2025-09-12 19:25:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:25:27.871667744 +0000 UTC m=+1.442877257" watchObservedRunningTime="2025-09-12 
19:25:27.871666695 +0000 UTC m=+1.442876186" Sep 12 19:25:27.873087 kubelet[2683]: I0912 19:25:27.872738 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-gt1mb.gb1.brightbox.com" podStartSLOduration=1.872729914 podStartE2EDuration="1.872729914s" podCreationTimestamp="2025-09-12 19:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:25:27.857325335 +0000 UTC m=+1.428534840" watchObservedRunningTime="2025-09-12 19:25:27.872729914 +0000 UTC m=+1.443939420" Sep 12 19:25:27.907313 kubelet[2683]: I0912 19:25:27.907246 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-gt1mb.gb1.brightbox.com" podStartSLOduration=1.907224817 podStartE2EDuration="1.907224817s" podCreationTimestamp="2025-09-12 19:25:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:25:27.882620607 +0000 UTC m=+1.453830131" watchObservedRunningTime="2025-09-12 19:25:27.907224817 +0000 UTC m=+1.478434338" Sep 12 19:25:30.003440 kubelet[2683]: I0912 19:25:30.003179 2683 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 19:25:30.005667 kubelet[2683]: I0912 19:25:30.005395 2683 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 19:25:30.005866 containerd[1514]: time="2025-09-12T19:25:30.005081410Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 19:25:30.755412 systemd[1]: Created slice kubepods-besteffort-pod60c4cb43_a389_45b3_a8b1_10a4d00de9e5.slice - libcontainer container kubepods-besteffort-pod60c4cb43_a389_45b3_a8b1_10a4d00de9e5.slice. 
Sep 12 19:25:30.837048 kubelet[2683]: I0912 19:25:30.836797 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/60c4cb43-a389-45b3-a8b1-10a4d00de9e5-xtables-lock\") pod \"kube-proxy-np6lt\" (UID: \"60c4cb43-a389-45b3-a8b1-10a4d00de9e5\") " pod="kube-system/kube-proxy-np6lt" Sep 12 19:25:30.837048 kubelet[2683]: I0912 19:25:30.836883 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/60c4cb43-a389-45b3-a8b1-10a4d00de9e5-lib-modules\") pod \"kube-proxy-np6lt\" (UID: \"60c4cb43-a389-45b3-a8b1-10a4d00de9e5\") " pod="kube-system/kube-proxy-np6lt" Sep 12 19:25:30.837048 kubelet[2683]: I0912 19:25:30.836925 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf4kr\" (UniqueName: \"kubernetes.io/projected/60c4cb43-a389-45b3-a8b1-10a4d00de9e5-kube-api-access-wf4kr\") pod \"kube-proxy-np6lt\" (UID: \"60c4cb43-a389-45b3-a8b1-10a4d00de9e5\") " pod="kube-system/kube-proxy-np6lt" Sep 12 19:25:30.837048 kubelet[2683]: I0912 19:25:30.836989 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/60c4cb43-a389-45b3-a8b1-10a4d00de9e5-kube-proxy\") pod \"kube-proxy-np6lt\" (UID: \"60c4cb43-a389-45b3-a8b1-10a4d00de9e5\") " pod="kube-system/kube-proxy-np6lt" Sep 12 19:25:31.076491 containerd[1514]: time="2025-09-12T19:25:31.076395494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np6lt,Uid:60c4cb43-a389-45b3-a8b1-10a4d00de9e5,Namespace:kube-system,Attempt:0,}" Sep 12 19:25:31.148082 containerd[1514]: time="2025-09-12T19:25:31.146929235Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:31.148470 containerd[1514]: time="2025-09-12T19:25:31.147656092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:31.148470 containerd[1514]: time="2025-09-12T19:25:31.147680518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:31.148470 containerd[1514]: time="2025-09-12T19:25:31.147865166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:31.169173 systemd[1]: Created slice kubepods-besteffort-pod95711825_3d2b_43da_9a7b_fe663dec4f67.slice - libcontainer container kubepods-besteffort-pod95711825_3d2b_43da_9a7b_fe663dec4f67.slice. Sep 12 19:25:31.206474 systemd[1]: Started cri-containerd-6b2f92f8e1dbdaa58a5b8d2faf6a691c99a22cfbc4ccea29cb47913e2f4cb0e7.scope - libcontainer container 6b2f92f8e1dbdaa58a5b8d2faf6a691c99a22cfbc4ccea29cb47913e2f4cb0e7. 
Sep 12 19:25:31.240875 kubelet[2683]: I0912 19:25:31.240394 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95711825-3d2b-43da-9a7b-fe663dec4f67-var-lib-calico\") pod \"tigera-operator-755d956888-lxvms\" (UID: \"95711825-3d2b-43da-9a7b-fe663dec4f67\") " pod="tigera-operator/tigera-operator-755d956888-lxvms" Sep 12 19:25:31.241756 kubelet[2683]: I0912 19:25:31.241412 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmxz\" (UniqueName: \"kubernetes.io/projected/95711825-3d2b-43da-9a7b-fe663dec4f67-kube-api-access-zlmxz\") pod \"tigera-operator-755d956888-lxvms\" (UID: \"95711825-3d2b-43da-9a7b-fe663dec4f67\") " pod="tigera-operator/tigera-operator-755d956888-lxvms" Sep 12 19:25:31.252402 containerd[1514]: time="2025-09-12T19:25:31.252313071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-np6lt,Uid:60c4cb43-a389-45b3-a8b1-10a4d00de9e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b2f92f8e1dbdaa58a5b8d2faf6a691c99a22cfbc4ccea29cb47913e2f4cb0e7\"" Sep 12 19:25:31.261381 containerd[1514]: time="2025-09-12T19:25:31.261332557Z" level=info msg="CreateContainer within sandbox \"6b2f92f8e1dbdaa58a5b8d2faf6a691c99a22cfbc4ccea29cb47913e2f4cb0e7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 19:25:31.298896 containerd[1514]: time="2025-09-12T19:25:31.298548142Z" level=info msg="CreateContainer within sandbox \"6b2f92f8e1dbdaa58a5b8d2faf6a691c99a22cfbc4ccea29cb47913e2f4cb0e7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e75f0205a7f70c5ec515aa6709405f9164c5f755fdf9120358e85431dd1f6c81\"" Sep 12 19:25:31.303162 containerd[1514]: time="2025-09-12T19:25:31.303126604Z" level=info msg="StartContainer for \"e75f0205a7f70c5ec515aa6709405f9164c5f755fdf9120358e85431dd1f6c81\"" Sep 12 19:25:31.358256 systemd[1]: Started 
cri-containerd-e75f0205a7f70c5ec515aa6709405f9164c5f755fdf9120358e85431dd1f6c81.scope - libcontainer container e75f0205a7f70c5ec515aa6709405f9164c5f755fdf9120358e85431dd1f6c81. Sep 12 19:25:31.413226 containerd[1514]: time="2025-09-12T19:25:31.412754358Z" level=info msg="StartContainer for \"e75f0205a7f70c5ec515aa6709405f9164c5f755fdf9120358e85431dd1f6c81\" returns successfully" Sep 12 19:25:31.478631 containerd[1514]: time="2025-09-12T19:25:31.477820603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lxvms,Uid:95711825-3d2b-43da-9a7b-fe663dec4f67,Namespace:tigera-operator,Attempt:0,}" Sep 12 19:25:31.536342 containerd[1514]: time="2025-09-12T19:25:31.535355048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:31.538941 containerd[1514]: time="2025-09-12T19:25:31.537052273Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:31.538941 containerd[1514]: time="2025-09-12T19:25:31.538867448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:31.539724 containerd[1514]: time="2025-09-12T19:25:31.539274377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:31.575190 systemd[1]: Started cri-containerd-6e041b789e08425fa55894f3747f57fff5270efa90b97d8089bdddf221a15a59.scope - libcontainer container 6e041b789e08425fa55894f3747f57fff5270efa90b97d8089bdddf221a15a59. 
Sep 12 19:25:31.667204 containerd[1514]: time="2025-09-12T19:25:31.666749968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-755d956888-lxvms,Uid:95711825-3d2b-43da-9a7b-fe663dec4f67,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e041b789e08425fa55894f3747f57fff5270efa90b97d8089bdddf221a15a59\"" Sep 12 19:25:31.672253 containerd[1514]: time="2025-09-12T19:25:31.671730081Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\"" Sep 12 19:25:31.861382 kubelet[2683]: I0912 19:25:31.860143 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-np6lt" podStartSLOduration=1.860095412 podStartE2EDuration="1.860095412s" podCreationTimestamp="2025-09-12 19:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:25:31.858269864 +0000 UTC m=+5.429479385" watchObservedRunningTime="2025-09-12 19:25:31.860095412 +0000 UTC m=+5.431304908" Sep 12 19:25:31.969363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount951081030.mount: Deactivated successfully. Sep 12 19:25:33.618550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4195786880.mount: Deactivated successfully. 
Sep 12 19:25:34.741471 containerd[1514]: time="2025-09-12T19:25:34.741402305Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:25:34.742684 containerd[1514]: time="2025-09-12T19:25:34.742141103Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.6: active requests=0, bytes read=25062609" Sep 12 19:25:34.746993 containerd[1514]: time="2025-09-12T19:25:34.744144781Z" level=info msg="ImageCreate event name:\"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:25:34.754668 containerd[1514]: time="2025-09-12T19:25:34.754612883Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:25:34.758197 containerd[1514]: time="2025-09-12T19:25:34.757245126Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.6\" with image id \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\", repo tag \"quay.io/tigera/operator:v1.38.6\", repo digest \"quay.io/tigera/operator@sha256:00a7a9b62f9b9a4e0856128b078539783b8352b07f707bff595cb604cc580f6e\", size \"25058604\" in 3.085106137s" Sep 12 19:25:34.758197 containerd[1514]: time="2025-09-12T19:25:34.757304548Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.6\" returns image reference \"sha256:1911afdd8478c6ca3036ff85614050d5d19acc0f0c3f6a5a7b3e34b38dd309c9\"" Sep 12 19:25:34.764567 containerd[1514]: time="2025-09-12T19:25:34.764507766Z" level=info msg="CreateContainer within sandbox \"6e041b789e08425fa55894f3747f57fff5270efa90b97d8089bdddf221a15a59\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Sep 12 19:25:34.793615 containerd[1514]: time="2025-09-12T19:25:34.793534620Z" level=info msg="CreateContainer within sandbox 
\"6e041b789e08425fa55894f3747f57fff5270efa90b97d8089bdddf221a15a59\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"913c2eeb23b72358fde0a5f7b7a6f4aab78e0b680931188e9a44bf5446a25519\"" Sep 12 19:25:34.795395 containerd[1514]: time="2025-09-12T19:25:34.795364000Z" level=info msg="StartContainer for \"913c2eeb23b72358fde0a5f7b7a6f4aab78e0b680931188e9a44bf5446a25519\"" Sep 12 19:25:34.865200 systemd[1]: Started cri-containerd-913c2eeb23b72358fde0a5f7b7a6f4aab78e0b680931188e9a44bf5446a25519.scope - libcontainer container 913c2eeb23b72358fde0a5f7b7a6f4aab78e0b680931188e9a44bf5446a25519. Sep 12 19:25:34.910106 containerd[1514]: time="2025-09-12T19:25:34.909841798Z" level=info msg="StartContainer for \"913c2eeb23b72358fde0a5f7b7a6f4aab78e0b680931188e9a44bf5446a25519\" returns successfully" Sep 12 19:25:35.863778 kubelet[2683]: I0912 19:25:35.862620 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-755d956888-lxvms" podStartSLOduration=1.772261958 podStartE2EDuration="4.862531938s" podCreationTimestamp="2025-09-12 19:25:31 +0000 UTC" firstStartedPulling="2025-09-12 19:25:31.669554271 +0000 UTC m=+5.240763763" lastFinishedPulling="2025-09-12 19:25:34.75982425 +0000 UTC m=+8.331033743" observedRunningTime="2025-09-12 19:25:35.862193414 +0000 UTC m=+9.433402920" watchObservedRunningTime="2025-09-12 19:25:35.862531938 +0000 UTC m=+9.433741443" Sep 12 19:25:42.822231 sudo[1774]: pam_unix(sudo:session): session closed for user root Sep 12 19:25:42.970499 sshd[1771]: pam_unix(sshd:session): session closed for user core Sep 12 19:25:42.987063 systemd-logind[1485]: Session 11 logged out. Waiting for processes to exit. Sep 12 19:25:42.989362 systemd[1]: sshd@8-10.230.43.118:22-139.178.68.195:58718.service: Deactivated successfully. Sep 12 19:25:42.998095 systemd[1]: session-11.scope: Deactivated successfully. 
Sep 12 19:25:42.999590 systemd[1]: session-11.scope: Consumed 8.198s CPU time, 143.7M memory peak, 0B memory swap peak. Sep 12 19:25:43.001508 systemd-logind[1485]: Removed session 11. Sep 12 19:25:47.684347 systemd[1]: Created slice kubepods-besteffort-pod342ab1aa_f96c_4511_80bf_c84cefd565eb.slice - libcontainer container kubepods-besteffort-pod342ab1aa_f96c_4511_80bf_c84cefd565eb.slice. Sep 12 19:25:47.764324 kubelet[2683]: I0912 19:25:47.764059 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/342ab1aa-f96c-4511-80bf-c84cefd565eb-typha-certs\") pod \"calico-typha-54d789dc96-9jc5s\" (UID: \"342ab1aa-f96c-4511-80bf-c84cefd565eb\") " pod="calico-system/calico-typha-54d789dc96-9jc5s" Sep 12 19:25:47.764324 kubelet[2683]: I0912 19:25:47.764203 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/342ab1aa-f96c-4511-80bf-c84cefd565eb-tigera-ca-bundle\") pod \"calico-typha-54d789dc96-9jc5s\" (UID: \"342ab1aa-f96c-4511-80bf-c84cefd565eb\") " pod="calico-system/calico-typha-54d789dc96-9jc5s" Sep 12 19:25:47.764324 kubelet[2683]: I0912 19:25:47.764248 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9xzk\" (UniqueName: \"kubernetes.io/projected/342ab1aa-f96c-4511-80bf-c84cefd565eb-kube-api-access-h9xzk\") pod \"calico-typha-54d789dc96-9jc5s\" (UID: \"342ab1aa-f96c-4511-80bf-c84cefd565eb\") " pod="calico-system/calico-typha-54d789dc96-9jc5s" Sep 12 19:25:47.970390 systemd[1]: Created slice kubepods-besteffort-pod74777830_f10c_41a8_873d_03d2ee9112ce.slice - libcontainer container kubepods-besteffort-pod74777830_f10c_41a8_873d_03d2ee9112ce.slice. 
Sep 12 19:25:47.998982 containerd[1514]: time="2025-09-12T19:25:47.998813271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d789dc96-9jc5s,Uid:342ab1aa-f96c-4511-80bf-c84cefd565eb,Namespace:calico-system,Attempt:0,}" Sep 12 19:25:48.066034 containerd[1514]: time="2025-09-12T19:25:48.065829863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:48.066698 containerd[1514]: time="2025-09-12T19:25:48.066077801Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:48.066698 containerd[1514]: time="2025-09-12T19:25:48.066164054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:48.067139 kubelet[2683]: I0912 19:25:48.066875 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-xtables-lock\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.067139 kubelet[2683]: I0912 19:25:48.066950 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-policysync\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.067139 kubelet[2683]: I0912 19:25:48.067023 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-lib-modules\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " 
pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.067139 kubelet[2683]: I0912 19:25:48.067071 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-var-run-calico\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.067139 kubelet[2683]: I0912 19:25:48.067112 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-cni-log-dir\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.068419 kubelet[2683]: I0912 19:25:48.067138 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-cni-net-dir\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.068419 kubelet[2683]: I0912 19:25:48.067190 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g56ql\" (UniqueName: \"kubernetes.io/projected/74777830-f10c-41a8-873d-03d2ee9112ce-kube-api-access-g56ql\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.068419 kubelet[2683]: I0912 19:25:48.067244 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-cni-bin-dir\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.068419 
kubelet[2683]: I0912 19:25:48.067272 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-flexvol-driver-host\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.068419 kubelet[2683]: I0912 19:25:48.067297 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/74777830-f10c-41a8-873d-03d2ee9112ce-node-certs\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.069178 containerd[1514]: time="2025-09-12T19:25:48.067561993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:48.069241 kubelet[2683]: I0912 19:25:48.067342 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74777830-f10c-41a8-873d-03d2ee9112ce-tigera-ca-bundle\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.069241 kubelet[2683]: I0912 19:25:48.067370 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/74777830-f10c-41a8-873d-03d2ee9112ce-var-lib-calico\") pod \"calico-node-8wtkh\" (UID: \"74777830-f10c-41a8-873d-03d2ee9112ce\") " pod="calico-system/calico-node-8wtkh" Sep 12 19:25:48.169173 systemd[1]: Started cri-containerd-96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf.scope - libcontainer container 96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf. 
Sep 12 19:25:48.203729 kubelet[2683]: E0912 19:25:48.203626 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.203729 kubelet[2683]: W0912 19:25:48.203687 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.204663 kubelet[2683]: E0912 19:25:48.204381 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.207598 kubelet[2683]: E0912 19:25:48.207494 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.207598 kubelet[2683]: W0912 19:25:48.207518 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.207598 kubelet[2683]: E0912 19:25:48.207537 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.253862 kubelet[2683]: E0912 19:25:48.253623 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04" Sep 12 19:25:48.277295 containerd[1514]: time="2025-09-12T19:25:48.277211585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8wtkh,Uid:74777830-f10c-41a8-873d-03d2ee9112ce,Namespace:calico-system,Attempt:0,}" Sep 12 19:25:48.345910 containerd[1514]: time="2025-09-12T19:25:48.345555444Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:25:48.345910 containerd[1514]: time="2025-09-12T19:25:48.345811987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:25:48.346381 containerd[1514]: time="2025-09-12T19:25:48.345882736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:48.349446 containerd[1514]: time="2025-09-12T19:25:48.348464808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:25:48.351722 kubelet[2683]: E0912 19:25:48.351630 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.351722 kubelet[2683]: W0912 19:25:48.351660 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.351722 kubelet[2683]: E0912 19:25:48.351715 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.352363 kubelet[2683]: E0912 19:25:48.352132 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.352363 kubelet[2683]: W0912 19:25:48.352151 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.352363 kubelet[2683]: E0912 19:25:48.352168 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.353442 kubelet[2683]: E0912 19:25:48.353265 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.353442 kubelet[2683]: W0912 19:25:48.353282 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.353442 kubelet[2683]: E0912 19:25:48.353298 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.354159 kubelet[2683]: E0912 19:25:48.353719 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.354159 kubelet[2683]: W0912 19:25:48.353734 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.354159 kubelet[2683]: E0912 19:25:48.353750 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.354159 kubelet[2683]: E0912 19:25:48.354054 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.354159 kubelet[2683]: W0912 19:25:48.354068 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.354159 kubelet[2683]: E0912 19:25:48.354083 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.354955 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.355727 kubelet[2683]: W0912 19:25:48.354999 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.355031 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.355268 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.355727 kubelet[2683]: W0912 19:25:48.355282 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.355298 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.355571 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.355727 kubelet[2683]: W0912 19:25:48.355585 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.355727 kubelet[2683]: E0912 19:25:48.355600 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.362392 kubelet[2683]: E0912 19:25:48.356405 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.362392 kubelet[2683]: W0912 19:25:48.356453 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.362392 kubelet[2683]: E0912 19:25:48.356479 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.362392 kubelet[2683]: E0912 19:25:48.361465 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.362392 kubelet[2683]: W0912 19:25:48.361505 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.362392 kubelet[2683]: E0912 19:25:48.361569 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.363998 kubelet[2683]: E0912 19:25:48.363063 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.363998 kubelet[2683]: W0912 19:25:48.363096 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.363998 kubelet[2683]: E0912 19:25:48.363113 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.363998 kubelet[2683]: E0912 19:25:48.363462 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.363998 kubelet[2683]: W0912 19:25:48.363476 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.363998 kubelet[2683]: E0912 19:25:48.363492 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.364317 kubelet[2683]: E0912 19:25:48.364295 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.364317 kubelet[2683]: W0912 19:25:48.364311 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.364430 kubelet[2683]: E0912 19:25:48.364327 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.365483 kubelet[2683]: E0912 19:25:48.365091 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.365483 kubelet[2683]: W0912 19:25:48.365111 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.365483 kubelet[2683]: E0912 19:25:48.365127 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.365677 kubelet[2683]: E0912 19:25:48.365526 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.365677 kubelet[2683]: W0912 19:25:48.365540 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.365677 kubelet[2683]: E0912 19:25:48.365556 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.368690 kubelet[2683]: E0912 19:25:48.367636 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.368690 kubelet[2683]: W0912 19:25:48.367657 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.368690 kubelet[2683]: E0912 19:25:48.367675 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.369091 kubelet[2683]: E0912 19:25:48.369039 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.369091 kubelet[2683]: W0912 19:25:48.369061 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.369091 kubelet[2683]: E0912 19:25:48.369078 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.371836 kubelet[2683]: E0912 19:25:48.371183 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.371836 kubelet[2683]: W0912 19:25:48.371218 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.371836 kubelet[2683]: E0912 19:25:48.371236 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Sep 12 19:25:48.372067 kubelet[2683]: E0912 19:25:48.371862 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.372067 kubelet[2683]: W0912 19:25:48.371894 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.372067 kubelet[2683]: E0912 19:25:48.371914 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Sep 12 19:25:48.375859 kubelet[2683]: E0912 19:25:48.375262 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Sep 12 19:25:48.375859 kubelet[2683]: W0912 19:25:48.375296 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Sep 12 19:25:48.375859 kubelet[2683]: E0912 19:25:48.375323 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Sep 12 19:25:48.377322 kubelet[2683]: E0912 19:25:48.377270 2683 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Sep 12 19:25:48.377322 kubelet[2683]: W0912 19:25:48.377289 2683 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Sep 12 19:25:48.377322 kubelet[2683]: E0912 19:25:48.377317 2683 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Sep 12 19:25:48.377884 kubelet[2683]: I0912 19:25:48.377361 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/06e002f4-3e23-487d-b3cb-f79cac263b04-varrun\") pod \"csi-node-driver-bxklx\" (UID: \"06e002f4-3e23-487d-b3cb-f79cac263b04\") " pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:25:48.385371 kubelet[2683]: I0912 19:25:48.384994 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcb4m\" (UniqueName: \"kubernetes.io/projected/06e002f4-3e23-487d-b3cb-f79cac263b04-kube-api-access-kcb4m\") pod \"csi-node-driver-bxklx\" (UID: \"06e002f4-3e23-487d-b3cb-f79cac263b04\") " pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:25:48.389185 kubelet[2683]: I0912 19:25:48.389136 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/06e002f4-3e23-487d-b3cb-f79cac263b04-kubelet-dir\") pod \"csi-node-driver-bxklx\" (UID: \"06e002f4-3e23-487d-b3cb-f79cac263b04\") " pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:25:48.393551 kubelet[2683]: I0912 19:25:48.392117 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/06e002f4-3e23-487d-b3cb-f79cac263b04-registration-dir\") pod \"csi-node-driver-bxklx\" (UID: \"06e002f4-3e23-487d-b3cb-f79cac263b04\") " pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:25:48.393551 kubelet[2683]: I0912 19:25:48.392448 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/06e002f4-3e23-487d-b3cb-f79cac263b04-socket-dir\") pod \"csi-node-driver-bxklx\" (UID: \"06e002f4-3e23-487d-b3cb-f79cac263b04\") " pod="calico-system/csi-node-driver-bxklx"
[kubelet FlexVolume probe-failure triplet above repeated verbatim through 19:25:48.398; duplicate entries elided]
Sep 12 19:25:48.422540 systemd[1]: Started cri-containerd-e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea.scope - libcontainer container e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea.
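The paired driver-call.go/plugins.go errors above come from kubelet invoking a FlexVolume driver binary that does not exist, then attempting to JSON-decode the driver's empty stdout. A minimal Python sketch of that failure mode (kubelet itself is Go, where decoding empty input is what produces the literal message "unexpected end of JSON input"; this snippet only mimics the mechanism):

```python
import json
import subprocess

def call_flexvolume_driver(executable: str, args: list[str]) -> dict:
    """Mimic kubelet's driver-call flow: run the driver, JSON-decode stdout.

    When the executable is missing, stdout stays empty, and decoding ""
    fails -- the same situation as "executable file not found in $PATH"
    followed by "unexpected end of JSON input" in the log above.
    """
    try:
        out = subprocess.run([executable, *args],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # no binary, no output
    return json.loads(out)  # raises on empty output

try:
    call_flexvolume_driver(
        "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds",
        ["init"])
except json.JSONDecodeError as e:
    print("driver call failed:", e.msg)  # -> driver call failed: Expecting value
```

Python's json module words the error differently ("Expecting value") than Go's encoding/json, but the root cause is identical: an empty string is not a valid JSON document.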
[kubelet FlexVolume probe-failure triplet repeated verbatim from 19:25:48.495 through 19:25:48.569; duplicate entries elided]
Sep 12 19:25:48.557903 containerd[1514]: time="2025-09-12T19:25:48.557833382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8wtkh,Uid:74777830-f10c-41a8-873d-03d2ee9112ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\""
Sep 12 19:25:48.571262 containerd[1514]: time="2025-09-12T19:25:48.571146742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\""
Sep 12 19:25:48.610710 containerd[1514]: time="2025-09-12T19:25:48.609448350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54d789dc96-9jc5s,Uid:342ab1aa-f96c-4511-80bf-c84cefd565eb,Namespace:calico-system,Attempt:0,} returns sandbox id \"96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf\""
Sep 12 19:25:48.904160 systemd[1]: run-containerd-runc-k8s.io-96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf-runc.yvL9u4.mount: Deactivated successfully.
Sep 12 19:25:49.771056 kubelet[2683]: E0912 19:25:49.769510 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:25:50.377214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3390671158.mount: Deactivated successfully.
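When triaging an excerpt like this, it helps to bucket kubelet messages by their source location (file:line) so the repeated FlexVolume probe noise collapses into counts and the entries that actually advance pod startup stand out. A hypothetical helper (the regex and sample lines are illustrative, not part of any existing tool):

```python
import re
from collections import Counter

# Match "kubelet[PID]: E0912 19:25:48.377270 2683 driver-call.go:262]" and
# capture the source location that identifies the message site.
KUBELET_LOC = re.compile(r"kubelet\[\d+\]: [EWI]\d{4} \S+\s+\d+ ([\w.-]+\.go:\d+)\]")

def count_by_location(lines):
    """Count kubelet journal entries grouped by emitting file:line."""
    return Counter(m.group(1) for line in lines for m in KUBELET_LOC.finditer(line))

sample = [
    'Sep 12 19:25:48.377322 kubelet[2683]: E0912 19:25:48.377270 2683 driver-call.go:262] Failed to unmarshal output',
    'Sep 12 19:25:48.377322 kubelet[2683]: W0912 19:25:48.377289 2683 driver-call.go:149] FlexVolume: driver call failed',
    'Sep 12 19:25:48.377322 kubelet[2683]: E0912 19:25:48.377317 2683 plugins.go:695] "Error dynamically probing plugins"',
    'Sep 12 19:25:48.377884 kubelet[2683]: I0912 19:25:48.377361 2683 reconciler_common.go:251] "VerifyControllerAttachedVolume"',
]
print(count_by_location(sample).most_common())
```

Run over the full journal, the three probe-failure sites dominate the counts while reconciler_common.go:251 and pod_workers.go:1301 each appear only a handful of times.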
Sep 12 19:25:50.590727 containerd[1514]: time="2025-09-12T19:25:50.589340794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:50.590727 containerd[1514]: time="2025-09-12T19:25:50.590625527Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3: active requests=0, bytes read=5939501"
Sep 12 19:25:50.591479 containerd[1514]: time="2025-09-12T19:25:50.591427237Z" level=info msg="ImageCreate event name:\"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:50.595666 containerd[1514]: time="2025-09-12T19:25:50.595619389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:50.597227 containerd[1514]: time="2025-09-12T19:25:50.597180554Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" with image id \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:81bdfcd9dbd36624dc35354e8c181c75631ba40e6c7df5820f5f56cea36f0ef9\", size \"5939323\" in 2.025976205s"
Sep 12 19:25:50.597374 containerd[1514]: time="2025-09-12T19:25:50.597330748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.3\" returns image reference \"sha256:4f2b088ed6fdfc6a97ac0650a4ba8171107d6656ce265c592e4c8423fd10e5c4\""
Sep 12 19:25:50.600366 containerd[1514]: time="2025-09-12T19:25:50.600327241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\""
Sep 12 19:25:50.602420 containerd[1514]: time="2025-09-12T19:25:50.602367000Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Sep 12 19:25:50.631302 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787118335.mount: Deactivated successfully.
Sep 12 19:25:50.633468 containerd[1514]: time="2025-09-12T19:25:50.633397464Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce\""
Sep 12 19:25:50.634469 containerd[1514]: time="2025-09-12T19:25:50.634422684Z" level=info msg="StartContainer for \"cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce\""
Sep 12 19:25:50.703294 systemd[1]: Started cri-containerd-cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce.scope - libcontainer container cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce.
Sep 12 19:25:50.777357 containerd[1514]: time="2025-09-12T19:25:50.777308028Z" level=info msg="StartContainer for \"cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce\" returns successfully"
Sep 12 19:25:50.798691 systemd[1]: cri-containerd-cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce.scope: Deactivated successfully.
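As a rough cross-check on the pull above, the throughput can be derived from the two numbers containerd does log (the rate itself is a derived figure, not logged anywhere):

```python
# containerd reports the pod2daemon-flexvol image as 5939323 bytes
# ("size" in the Pulled entry), fetched in 2.025976205 s.
size_bytes = 5_939_323
elapsed_s = 2.025976205
mib_per_s = size_bytes / elapsed_s / (1024 ** 2)
print(f"{mib_per_s:.2f} MiB/s")  # -> 2.80 MiB/s
```

Note the slight mismatch with "bytes read=5939501" in the stop-pulling entry; the "size" field in the Pulled message is the figure used here.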
Sep 12 19:25:50.867378 containerd[1514]: time="2025-09-12T19:25:50.866416723Z" level=info msg="shim disconnected" id=cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce namespace=k8s.io
Sep 12 19:25:50.867378 containerd[1514]: time="2025-09-12T19:25:50.867118274Z" level=warning msg="cleaning up after shim disconnected" id=cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce namespace=k8s.io
Sep 12 19:25:50.867378 containerd[1514]: time="2025-09-12T19:25:50.867159620Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 19:25:51.257905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cfdbf02a4462160b5791b0e3b4585ebb078876e3529870b4112b9f6e64a79bce-rootfs.mount: Deactivated successfully.
Sep 12 19:25:51.769450 kubelet[2683]: E0912 19:25:51.769359 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:25:53.769587 kubelet[2683]: E0912 19:25:53.769433 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:25:54.363256 containerd[1514]: time="2025-09-12T19:25:54.363142330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:54.364887 containerd[1514]: time="2025-09-12T19:25:54.364810213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.3: active requests=0, bytes read=33744548"
Sep 12 19:25:54.365989 containerd[1514]: time="2025-09-12T19:25:54.365489199Z" level=info msg="ImageCreate event name:\"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:54.368672 containerd[1514]: time="2025-09-12T19:25:54.368632719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:54.369948 containerd[1514]: time="2025-09-12T19:25:54.369891351Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.3\" with image id \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:f4a3d61ffda9c98a53adeb412c5af404ca3727a3cc2d0b4ef28d197bdd47ecaa\", size \"35237243\" in 3.76951427s"
Sep 12 19:25:54.370126 containerd[1514]: time="2025-09-12T19:25:54.370097062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.3\" returns image reference \"sha256:1d7bb7b0cce2924d35c7c26f6b6600409ea7c9535074c3d2e517ffbb3a0e0b36\""
Sep 12 19:25:54.372466 containerd[1514]: time="2025-09-12T19:25:54.372436484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\""
Sep 12 19:25:54.401430 containerd[1514]: time="2025-09-12T19:25:54.401352452Z" level=info msg="CreateContainer within sandbox \"96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Sep 12 19:25:54.434896 containerd[1514]: time="2025-09-12T19:25:54.434754096Z" level=info msg="CreateContainer within sandbox \"96fd1ef1edbe79f9d49107feb4fa836780f1c6bc49df383f2b251c0a7bce69cf\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"345ac248ef27c8acb75083539df8b87d47a7662664c3f2b0c22000d47c6247a2\""
Sep 12 19:25:54.436619 containerd[1514]: time="2025-09-12T19:25:54.436575503Z" level=info msg="StartContainer for \"345ac248ef27c8acb75083539df8b87d47a7662664c3f2b0c22000d47c6247a2\""
Sep 12 19:25:54.543397 systemd[1]: Started cri-containerd-345ac248ef27c8acb75083539df8b87d47a7662664c3f2b0c22000d47c6247a2.scope - libcontainer container 345ac248ef27c8acb75083539df8b87d47a7662664c3f2b0c22000d47c6247a2.
Sep 12 19:25:54.630571 containerd[1514]: time="2025-09-12T19:25:54.630286138Z" level=info msg="StartContainer for \"345ac248ef27c8acb75083539df8b87d47a7662664c3f2b0c22000d47c6247a2\" returns successfully"
Sep 12 19:25:55.768853 kubelet[2683]: E0912 19:25:55.768688 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:25:55.944934 kubelet[2683]: I0912 19:25:55.944561 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 19:25:57.769126 kubelet[2683]: E0912 19:25:57.768832 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:25:59.767163 containerd[1514]: time="2025-09-12T19:25:59.766987185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:25:59.770380 kubelet[2683]: E0912 19:25:59.768365 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bxklx" 
podUID="06e002f4-3e23-487d-b3cb-f79cac263b04" Sep 12 19:25:59.771267 containerd[1514]: time="2025-09-12T19:25:59.770462902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.3: active requests=0, bytes read=70440613" Sep 12 19:25:59.773917 containerd[1514]: time="2025-09-12T19:25:59.772974621Z" level=info msg="ImageCreate event name:\"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:25:59.779007 containerd[1514]: time="2025-09-12T19:25:59.778955145Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:25:59.781044 containerd[1514]: time="2025-09-12T19:25:59.780428046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.3\" with image id \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:73d1e391050490d54e5bee8ff2b1a50a8be1746c98dc530361b00e8c0ab63f87\", size \"71933316\" in 5.407827186s" Sep 12 19:25:59.781500 containerd[1514]: time="2025-09-12T19:25:59.781185665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.3\" returns image reference \"sha256:034822460c2f667e1f4a7679c843cc35ce1bf2c25dec86f04e07fb403df7e458\"" Sep 12 19:25:59.790075 containerd[1514]: time="2025-09-12T19:25:59.790011779Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Sep 12 19:25:59.859615 containerd[1514]: time="2025-09-12T19:25:59.859440813Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id 
\"e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699\"" Sep 12 19:25:59.862062 containerd[1514]: time="2025-09-12T19:25:59.861951137Z" level=info msg="StartContainer for \"e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699\"" Sep 12 19:25:59.935243 systemd[1]: Started cri-containerd-e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699.scope - libcontainer container e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699. Sep 12 19:26:00.021248 containerd[1514]: time="2025-09-12T19:26:00.020228649Z" level=info msg="StartContainer for \"e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699\" returns successfully" Sep 12 19:26:01.013563 kubelet[2683]: I0912 19:26:01.013398 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54d789dc96-9jc5s" podStartSLOduration=8.265578876 podStartE2EDuration="14.013268924s" podCreationTimestamp="2025-09-12 19:25:47 +0000 UTC" firstStartedPulling="2025-09-12 19:25:48.624066958 +0000 UTC m=+22.195276449" lastFinishedPulling="2025-09-12 19:25:54.371756986 +0000 UTC m=+27.942966497" observedRunningTime="2025-09-12 19:25:54.972293441 +0000 UTC m=+28.543502969" watchObservedRunningTime="2025-09-12 19:26:01.013268924 +0000 UTC m=+34.584478452" Sep 12 19:26:01.076956 systemd[1]: cri-containerd-e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699.scope: Deactivated successfully. Sep 12 19:26:01.161019 kubelet[2683]: I0912 19:26:01.160502 2683 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 19:26:01.195401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699-rootfs.mount: Deactivated successfully. 
Sep 12 19:26:01.371993 containerd[1514]: time="2025-09-12T19:26:01.370847298Z" level=info msg="shim disconnected" id=e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699 namespace=k8s.io
Sep 12 19:26:01.371993 containerd[1514]: time="2025-09-12T19:26:01.371055040Z" level=warning msg="cleaning up after shim disconnected" id=e1cdcb60025481040b78df7ff2219464bf2214cdc3f25a6b1272b2f77ffaf699 namespace=k8s.io
Sep 12 19:26:01.371993 containerd[1514]: time="2025-09-12T19:26:01.371084076Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 19:26:01.443120 systemd[1]: Created slice kubepods-burstable-podd7b3e6e7_1f71_472a_a961_c100d7e8208f.slice - libcontainer container kubepods-burstable-podd7b3e6e7_1f71_472a_a961_c100d7e8208f.slice.
Sep 12 19:26:01.461174 systemd[1]: Created slice kubepods-besteffort-pod7edf616c_f7f3_4891_b477_a2522e01e8c8.slice - libcontainer container kubepods-besteffort-pod7edf616c_f7f3_4891_b477_a2522e01e8c8.slice.
Sep 12 19:26:01.477303 kubelet[2683]: W0912 19:26:01.476669 2683 reflector.go:569] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-gt1mb.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object
Sep 12 19:26:01.481628 kubelet[2683]: W0912 19:26:01.480791 2683 reflector.go:569] object-"calico-system"/"goldmane-key-pair": failed to list *v1.Secret: secrets "goldmane-key-pair" is forbidden: User "system:node:srv-gt1mb.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object
Sep 12 19:26:01.484342 systemd[1]: Created slice kubepods-burstable-podbfac3adb_2001_4e77_9843_4702a6abb198.slice - libcontainer container kubepods-burstable-podbfac3adb_2001_4e77_9843_4702a6abb198.slice.
Sep 12 19:26:01.490785 kubelet[2683]: W0912 19:26:01.490739 2683 reflector.go:569] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-gt1mb.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object
Sep 12 19:26:01.492227 kubelet[2683]: E0912 19:26:01.492182 2683 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-gt1mb.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 12 19:26:01.492597 kubelet[2683]: W0912 19:26:01.492426 2683 reflector.go:569] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:srv-gt1mb.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object
Sep 12 19:26:01.492597 kubelet[2683]: E0912 19:26:01.492456 2683 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:srv-gt1mb.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 12 19:26:01.493517 kubelet[2683]: E0912 19:26:01.493256 2683 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"goldmane-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"goldmane-key-pair\" is forbidden: User \"system:node:srv-gt1mb.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 12 19:26:01.493517 kubelet[2683]: W0912 19:26:01.493333 2683 reflector.go:569] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:srv-gt1mb.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object
Sep 12 19:26:01.493517 kubelet[2683]: E0912 19:26:01.493358 2683 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:srv-gt1mb.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 12 19:26:01.493717 kubelet[2683]: E0912 19:26:01.493596 2683 reflector.go:166] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-gt1mb.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-gt1mb.gb1.brightbox.com' and this object" logger="UnhandledError"
Sep 12 19:26:01.512302 kubelet[2683]: I0912 19:26:01.512256 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair\") pod \"whisker-548bf8484-dn2jb\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " pod="calico-system/whisker-548bf8484-dn2jb"
Sep 12 19:26:01.512594 kubelet[2683]: I0912 19:26:01.512543 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7edf616c-f7f3-4891-b477-a2522e01e8c8-tigera-ca-bundle\") pod \"calico-kube-controllers-6b79d5b75-txtkb\" (UID: \"7edf616c-f7f3-4891-b477-a2522e01e8c8\") " pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb"
Sep 12 19:26:01.513691 systemd[1]: Created slice kubepods-besteffort-pod3ca95b62_d472_43b2_a369_e4f919c62dfe.slice - libcontainer container kubepods-besteffort-pod3ca95b62_d472_43b2_a369_e4f919c62dfe.slice.
Sep 12 19:26:01.515999 kubelet[2683]: I0912 19:26:01.515083 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1d596bad-4e4f-445e-bb83-18354a67cb67-calico-apiserver-certs\") pod \"calico-apiserver-746f98d4d8-b4lq7\" (UID: \"1d596bad-4e4f-445e-bb83-18354a67cb67\") " pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7"
Sep 12 19:26:01.515999 kubelet[2683]: I0912 19:26:01.515150 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6fa90bc5-cf93-4dc8-ab8e-d382c74b770b-config\") pod \"goldmane-54d579b49d-hntgn\" (UID: \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\") " pod="calico-system/goldmane-54d579b49d-hntgn"
Sep 12 19:26:01.515999 kubelet[2683]: I0912 19:26:01.515196 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6fa90bc5-cf93-4dc8-ab8e-d382c74b770b-goldmane-key-pair\") pod \"goldmane-54d579b49d-hntgn\" (UID: \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\") " pod="calico-system/goldmane-54d579b49d-hntgn"
Sep 12 19:26:01.515999 kubelet[2683]: I0912 19:26:01.515241 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf7xj\" (UniqueName: \"kubernetes.io/projected/6fa90bc5-cf93-4dc8-ab8e-d382c74b770b-kube-api-access-bf7xj\") pod \"goldmane-54d579b49d-hntgn\" (UID: \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\") " pod="calico-system/goldmane-54d579b49d-hntgn"
Sep 12 19:26:01.515999 kubelet[2683]: I0912 19:26:01.515283 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bfac3adb-2001-4e77-9843-4702a6abb198-config-volume\") pod \"coredns-668d6bf9bc-sz5ss\" (UID: \"bfac3adb-2001-4e77-9843-4702a6abb198\") " pod="kube-system/coredns-668d6bf9bc-sz5ss"
Sep 12 19:26:01.516353 kubelet[2683]: I0912 19:26:01.515321 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmx96\" (UniqueName: \"kubernetes.io/projected/bfac3adb-2001-4e77-9843-4702a6abb198-kube-api-access-tmx96\") pod \"coredns-668d6bf9bc-sz5ss\" (UID: \"bfac3adb-2001-4e77-9843-4702a6abb198\") " pod="kube-system/coredns-668d6bf9bc-sz5ss"
Sep 12 19:26:01.516353 kubelet[2683]: I0912 19:26:01.515366 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzzw9\" (UniqueName: \"kubernetes.io/projected/d7b3e6e7-1f71-472a-a961-c100d7e8208f-kube-api-access-nzzw9\") pod \"coredns-668d6bf9bc-qf4p7\" (UID: \"d7b3e6e7-1f71-472a-a961-c100d7e8208f\") " pod="kube-system/coredns-668d6bf9bc-qf4p7"
Sep 12 19:26:01.516353 kubelet[2683]: I0912 19:26:01.515422 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476-calico-apiserver-certs\") pod \"calico-apiserver-746f98d4d8-55djp\" (UID: \"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476\") " pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp"
Sep 12 19:26:01.516353 kubelet[2683]: I0912 19:26:01.515465 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p49n\" (UniqueName: \"kubernetes.io/projected/1d596bad-4e4f-445e-bb83-18354a67cb67-kube-api-access-8p49n\") pod \"calico-apiserver-746f98d4d8-b4lq7\" (UID: \"1d596bad-4e4f-445e-bb83-18354a67cb67\") " pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7"
Sep 12 19:26:01.516353 kubelet[2683]: I0912 19:26:01.515506 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hx6fh\" (UniqueName: \"kubernetes.io/projected/3ca95b62-d472-43b2-a369-e4f919c62dfe-kube-api-access-hx6fh\") pod \"whisker-548bf8484-dn2jb\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " pod="calico-system/whisker-548bf8484-dn2jb"
Sep 12 19:26:01.516703 kubelet[2683]: I0912 19:26:01.515541 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxh72\" (UniqueName: \"kubernetes.io/projected/1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476-kube-api-access-jxh72\") pod \"calico-apiserver-746f98d4d8-55djp\" (UID: \"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476\") " pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp"
Sep 12 19:26:01.516703 kubelet[2683]: I0912 19:26:01.515594 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9sj6\" (UniqueName: \"kubernetes.io/projected/7edf616c-f7f3-4891-b477-a2522e01e8c8-kube-api-access-z9sj6\") pod \"calico-kube-controllers-6b79d5b75-txtkb\" (UID: \"7edf616c-f7f3-4891-b477-a2522e01e8c8\") " pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb"
Sep 12 19:26:01.516703 kubelet[2683]: I0912 19:26:01.515635 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle\") pod \"whisker-548bf8484-dn2jb\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " pod="calico-system/whisker-548bf8484-dn2jb"
Sep 12 19:26:01.516703 kubelet[2683]: I0912 19:26:01.515678 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d7b3e6e7-1f71-472a-a961-c100d7e8208f-config-volume\") pod \"coredns-668d6bf9bc-qf4p7\" (UID: \"d7b3e6e7-1f71-472a-a961-c100d7e8208f\") " pod="kube-system/coredns-668d6bf9bc-qf4p7"
Sep 12 19:26:01.516703 kubelet[2683]: I0912 19:26:01.515723 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6fa90bc5-cf93-4dc8-ab8e-d382c74b770b-goldmane-ca-bundle\") pod \"goldmane-54d579b49d-hntgn\" (UID: \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\") " pod="calico-system/goldmane-54d579b49d-hntgn"
Sep 12 19:26:01.533279 systemd[1]: Created slice kubepods-besteffort-pod1a1e8a9d_b2d9_49ef_ad3f_47a3e6130476.slice - libcontainer container kubepods-besteffort-pod1a1e8a9d_b2d9_49ef_ad3f_47a3e6130476.slice.
Sep 12 19:26:01.562289 systemd[1]: Created slice kubepods-besteffort-pod1d596bad_4e4f_445e_bb83_18354a67cb67.slice - libcontainer container kubepods-besteffort-pod1d596bad_4e4f_445e_bb83_18354a67cb67.slice.
Sep 12 19:26:01.577518 systemd[1]: Created slice kubepods-besteffort-pod6fa90bc5_cf93_4dc8_ab8e_d382c74b770b.slice - libcontainer container kubepods-besteffort-pod6fa90bc5_cf93_4dc8_ab8e_d382c74b770b.slice.
Sep 12 19:26:01.766865 containerd[1514]: time="2025-09-12T19:26:01.765351344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qf4p7,Uid:d7b3e6e7-1f71-472a-a961-c100d7e8208f,Namespace:kube-system,Attempt:0,}"
Sep 12 19:26:01.770714 containerd[1514]: time="2025-09-12T19:26:01.770024463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b79d5b75-txtkb,Uid:7edf616c-f7f3-4891-b477-a2522e01e8c8,Namespace:calico-system,Attempt:0,}"
Sep 12 19:26:01.783907 systemd[1]: Created slice kubepods-besteffort-pod06e002f4_3e23_487d_b3cb_f79cac263b04.slice - libcontainer container kubepods-besteffort-pod06e002f4_3e23_487d_b3cb_f79cac263b04.slice.
Sep 12 19:26:01.788470 containerd[1514]: time="2025-09-12T19:26:01.788185799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bxklx,Uid:06e002f4-3e23-487d-b3cb-f79cac263b04,Namespace:calico-system,Attempt:0,}"
Sep 12 19:26:01.813918 containerd[1514]: time="2025-09-12T19:26:01.813868228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sz5ss,Uid:bfac3adb-2001-4e77-9843-4702a6abb198,Namespace:kube-system,Attempt:0,}"
Sep 12 19:26:01.986735 containerd[1514]: time="2025-09-12T19:26:01.986679649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\""
Sep 12 19:26:02.194162 containerd[1514]: time="2025-09-12T19:26:02.194096010Z" level=error msg="Failed to destroy network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.194390 containerd[1514]: time="2025-09-12T19:26:02.194090565Z" level=error msg="Failed to destroy network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.197081 containerd[1514]: time="2025-09-12T19:26:02.194118018Z" level=error msg="Failed to destroy network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.205148 containerd[1514]: time="2025-09-12T19:26:02.204034700Z" level=error msg="encountered an error cleaning up failed sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.205148 containerd[1514]: time="2025-09-12T19:26:02.204165357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sz5ss,Uid:bfac3adb-2001-4e77-9843-4702a6abb198,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.209690 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c-shm.mount: Deactivated successfully.
Sep 12 19:26:02.209859 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6-shm.mount: Deactivated successfully.
Sep 12 19:26:02.210294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f-shm.mount: Deactivated successfully.
Sep 12 19:26:02.214980 containerd[1514]: time="2025-09-12T19:26:02.211946166Z" level=error msg="encountered an error cleaning up failed sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.214980 containerd[1514]: time="2025-09-12T19:26:02.212059721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bxklx,Uid:06e002f4-3e23-487d-b3cb-f79cac263b04,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.216929 containerd[1514]: time="2025-09-12T19:26:02.216891329Z" level=error msg="encountered an error cleaning up failed sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.217864 kubelet[2683]: E0912 19:26:02.217783 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.218377 kubelet[2683]: E0912 19:26:02.217995 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:26:02.218377 kubelet[2683]: E0912 19:26:02.218058 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bxklx"
Sep 12 19:26:02.218377 kubelet[2683]: E0912 19:26:02.218154 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bxklx_calico-system(06e002f4-3e23-487d-b3cb-f79cac263b04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bxklx_calico-system(06e002f4-3e23-487d-b3cb-f79cac263b04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04"
Sep 12 19:26:02.218685 containerd[1514]: time="2025-09-12T19:26:02.218645510Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b79d5b75-txtkb,Uid:7edf616c-f7f3-4891-b477-a2522e01e8c8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.219392 kubelet[2683]: E0912 19:26:02.219338 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.219476 kubelet[2683]: E0912 19:26:02.219413 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sz5ss"
Sep 12 19:26:02.219476 kubelet[2683]: E0912 19:26:02.219443 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-sz5ss"
Sep 12 19:26:02.219590 kubelet[2683]: E0912 19:26:02.219482 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-sz5ss_kube-system(bfac3adb-2001-4e77-9843-4702a6abb198)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-sz5ss_kube-system(bfac3adb-2001-4e77-9843-4702a6abb198)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sz5ss" podUID="bfac3adb-2001-4e77-9843-4702a6abb198"
Sep 12 19:26:02.219690 kubelet[2683]: E0912 19:26:02.219597 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.219690 kubelet[2683]: E0912 19:26:02.219639 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb"
Sep 12 19:26:02.219690 kubelet[2683]: E0912 19:26:02.219667 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb"
Sep 12 19:26:02.219909 kubelet[2683]: E0912 19:26:02.219712 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6b79d5b75-txtkb_calico-system(7edf616c-f7f3-4891-b477-a2522e01e8c8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6b79d5b75-txtkb_calico-system(7edf616c-f7f3-4891-b477-a2522e01e8c8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb" podUID="7edf616c-f7f3-4891-b477-a2522e01e8c8"
Sep 12 19:26:02.220646 containerd[1514]: time="2025-09-12T19:26:02.220597109Z" level=error msg="Failed to destroy network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.224243 containerd[1514]: time="2025-09-12T19:26:02.224168448Z" level=error msg="encountered an error cleaning up failed sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.224328 containerd[1514]: time="2025-09-12T19:26:02.224272954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qf4p7,Uid:d7b3e6e7-1f71-472a-a961-c100d7e8208f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.224860 kubelet[2683]: E0912 19:26:02.224725 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Sep 12 19:26:02.225224 kubelet[2683]: E0912 19:26:02.224783 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qf4p7"
Sep 12 19:26:02.225224 kubelet[2683]: E0912 19:26:02.225018 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-qf4p7"
Sep 12 19:26:02.225224 kubelet[2683]: E0912 19:26:02.225078 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-qf4p7_kube-system(d7b3e6e7-1f71-472a-a961-c100d7e8208f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-qf4p7_kube-system(d7b3e6e7-1f71-472a-a961-c100d7e8208f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qf4p7" podUID="d7b3e6e7-1f71-472a-a961-c100d7e8208f" Sep 12 19:26:02.226665 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a-shm.mount: Deactivated successfully. Sep 12 19:26:02.485921 containerd[1514]: time="2025-09-12T19:26:02.485768886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hntgn,Uid:6fa90bc5-cf93-4dc8-ab8e-d382c74b770b,Namespace:calico-system,Attempt:0,}" Sep 12 19:26:02.590888 containerd[1514]: time="2025-09-12T19:26:02.590665240Z" level=error msg="Failed to destroy network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.591808 containerd[1514]: time="2025-09-12T19:26:02.591479148Z" level=error msg="encountered an error cleaning up failed sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.591808 containerd[1514]: 
time="2025-09-12T19:26:02.591562540Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hntgn,Uid:6fa90bc5-cf93-4dc8-ab8e-d382c74b770b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.592186 kubelet[2683]: E0912 19:26:02.591841 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.592186 kubelet[2683]: E0912 19:26:02.591936 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hntgn" Sep 12 19:26:02.592186 kubelet[2683]: E0912 19:26:02.591991 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-54d579b49d-hntgn" Sep 12 19:26:02.592471 kubelet[2683]: E0912 19:26:02.592087 
2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-54d579b49d-hntgn_calico-system(6fa90bc5-cf93-4dc8-ab8e-d382c74b770b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-54d579b49d-hntgn_calico-system(6fa90bc5-cf93-4dc8-ab8e-d382c74b770b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hntgn" podUID="6fa90bc5-cf93-4dc8-ab8e-d382c74b770b" Sep 12 19:26:02.621632 kubelet[2683]: E0912 19:26:02.621284 2683 configmap.go:193] Couldn't get configMap calico-system/whisker-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Sep 12 19:26:02.631272 kubelet[2683]: E0912 19:26:02.630672 2683 secret.go:189] Couldn't get secret calico-system/whisker-backend-key-pair: failed to sync secret cache: timed out waiting for the condition Sep 12 19:26:02.636272 kubelet[2683]: E0912 19:26:02.636161 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle podName:3ca95b62-d472-43b2-a369-e4f919c62dfe nodeName:}" failed. No retries permitted until 2025-09-12 19:26:03.133306535 +0000 UTC m=+36.704516032 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "whisker-ca-bundle" (UniqueName: "kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle") pod "whisker-548bf8484-dn2jb" (UID: "3ca95b62-d472-43b2-a369-e4f919c62dfe") : failed to sync configmap cache: timed out waiting for the condition Sep 12 19:26:02.636272 kubelet[2683]: E0912 19:26:02.636231 2683 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair podName:3ca95b62-d472-43b2-a369-e4f919c62dfe nodeName:}" failed. No retries permitted until 2025-09-12 19:26:03.13621596 +0000 UTC m=+36.707425464 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "whisker-backend-key-pair" (UniqueName: "kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair") pod "whisker-548bf8484-dn2jb" (UID: "3ca95b62-d472-43b2-a369-e4f919c62dfe") : failed to sync secret cache: timed out waiting for the condition Sep 12 19:26:02.764284 containerd[1514]: time="2025-09-12T19:26:02.764127497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-55djp,Uid:1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476,Namespace:calico-apiserver,Attempt:0,}" Sep 12 19:26:02.770341 containerd[1514]: time="2025-09-12T19:26:02.770011005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-b4lq7,Uid:1d596bad-4e4f-445e-bb83-18354a67cb67,Namespace:calico-apiserver,Attempt:0,}" Sep 12 19:26:02.880353 containerd[1514]: time="2025-09-12T19:26:02.880240497Z" level=error msg="Failed to destroy network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.880911 containerd[1514]: time="2025-09-12T19:26:02.880707720Z" level=error 
msg="encountered an error cleaning up failed sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.880911 containerd[1514]: time="2025-09-12T19:26:02.880805532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-b4lq7,Uid:1d596bad-4e4f-445e-bb83-18354a67cb67,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.881246 kubelet[2683]: E0912 19:26:02.881143 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.881353 kubelet[2683]: E0912 19:26:02.881248 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7" Sep 12 19:26:02.881353 kubelet[2683]: E0912 19:26:02.881288 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7" Sep 12 19:26:02.881472 kubelet[2683]: E0912 19:26:02.881400 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-746f98d4d8-b4lq7_calico-apiserver(1d596bad-4e4f-445e-bb83-18354a67cb67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-746f98d4d8-b4lq7_calico-apiserver(1d596bad-4e4f-445e-bb83-18354a67cb67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7" podUID="1d596bad-4e4f-445e-bb83-18354a67cb67" Sep 12 19:26:02.885061 containerd[1514]: time="2025-09-12T19:26:02.885012020Z" level=error msg="Failed to destroy network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.885533 containerd[1514]: time="2025-09-12T19:26:02.885492018Z" level=error msg="encountered an error cleaning up failed sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Sep 12 19:26:02.885641 containerd[1514]: time="2025-09-12T19:26:02.885561977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-55djp,Uid:1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.886044 kubelet[2683]: E0912 19:26:02.885996 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:02.886128 kubelet[2683]: E0912 19:26:02.886078 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp" Sep 12 19:26:02.886128 kubelet[2683]: E0912 19:26:02.886118 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp" Sep 12 19:26:02.886299 kubelet[2683]: E0912 19:26:02.886183 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-746f98d4d8-55djp_calico-apiserver(1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-746f98d4d8-55djp_calico-apiserver(1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp" podUID="1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476" Sep 12 19:26:02.979264 kubelet[2683]: I0912 19:26:02.979210 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:02.983397 kubelet[2683]: I0912 19:26:02.983367 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:02.998093 kubelet[2683]: I0912 19:26:02.997950 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:03.009627 kubelet[2683]: I0912 19:26:03.009565 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:03.015944 kubelet[2683]: I0912 19:26:03.015787 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:03.023992 
kubelet[2683]: I0912 19:26:03.023954 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:03.043735 containerd[1514]: time="2025-09-12T19:26:03.042568567Z" level=info msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" Sep 12 19:26:03.044018 containerd[1514]: time="2025-09-12T19:26:03.043959760Z" level=info msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" Sep 12 19:26:03.044722 containerd[1514]: time="2025-09-12T19:26:03.044694283Z" level=info msg="Ensure that sandbox c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8 in task-service has been cleanup successfully" Sep 12 19:26:03.044905 containerd[1514]: time="2025-09-12T19:26:03.044862555Z" level=info msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" Sep 12 19:26:03.045165 containerd[1514]: time="2025-09-12T19:26:03.045132264Z" level=info msg="Ensure that sandbox a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a in task-service has been cleanup successfully" Sep 12 19:26:03.048219 containerd[1514]: time="2025-09-12T19:26:03.044723793Z" level=info msg="Ensure that sandbox e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015 in task-service has been cleanup successfully" Sep 12 19:26:03.049265 containerd[1514]: time="2025-09-12T19:26:03.044776670Z" level=info msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" Sep 12 19:26:03.050125 containerd[1514]: time="2025-09-12T19:26:03.050095914Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\"" Sep 12 19:26:03.050828 containerd[1514]: time="2025-09-12T19:26:03.050464454Z" level=info msg="Ensure that sandbox 98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26 in task-service has been 
cleanup successfully" Sep 12 19:26:03.051527 containerd[1514]: time="2025-09-12T19:26:03.051490597Z" level=info msg="Ensure that sandbox 63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c in task-service has been cleanup successfully" Sep 12 19:26:03.054014 containerd[1514]: time="2025-09-12T19:26:03.044816752Z" level=info msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" Sep 12 19:26:03.054014 containerd[1514]: time="2025-09-12T19:26:03.053813084Z" level=info msg="Ensure that sandbox 31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f in task-service has been cleanup successfully" Sep 12 19:26:03.060693 kubelet[2683]: I0912 19:26:03.060665 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:03.064785 containerd[1514]: time="2025-09-12T19:26:03.064557745Z" level=info msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" Sep 12 19:26:03.065985 containerd[1514]: time="2025-09-12T19:26:03.065466669Z" level=info msg="Ensure that sandbox d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6 in task-service has been cleanup successfully" Sep 12 19:26:03.207485 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8-shm.mount: Deactivated successfully. 
Sep 12 19:26:03.231470 containerd[1514]: time="2025-09-12T19:26:03.231226580Z" level=error msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" failed" error="failed to destroy network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.231943 kubelet[2683]: E0912 19:26:03.231608 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:03.231943 kubelet[2683]: E0912 19:26:03.231718 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8"} Sep 12 19:26:03.231943 kubelet[2683]: E0912 19:26:03.231841 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.231943 kubelet[2683]: E0912 19:26:03.231874 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-54d579b49d-hntgn" podUID="6fa90bc5-cf93-4dc8-ab8e-d382c74b770b" Sep 12 19:26:03.238517 containerd[1514]: time="2025-09-12T19:26:03.237833864Z" level=error msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" failed" error="failed to destroy network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.238712 kubelet[2683]: E0912 19:26:03.238307 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:03.238712 kubelet[2683]: E0912 19:26:03.238392 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015"} Sep 12 19:26:03.238712 kubelet[2683]: E0912 19:26:03.238439 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1d596bad-4e4f-445e-bb83-18354a67cb67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.238712 kubelet[2683]: E0912 19:26:03.238627 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1d596bad-4e4f-445e-bb83-18354a67cb67\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7" podUID="1d596bad-4e4f-445e-bb83-18354a67cb67" Sep 12 19:26:03.241202 containerd[1514]: time="2025-09-12T19:26:03.241056369Z" level=error msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" failed" error="failed to destroy network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.241356 kubelet[2683]: E0912 19:26:03.241233 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:03.241356 kubelet[2683]: E0912 19:26:03.241277 2683 
kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a"} Sep 12 19:26:03.241356 kubelet[2683]: E0912 19:26:03.241323 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d7b3e6e7-1f71-472a-a961-c100d7e8208f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.241986 kubelet[2683]: E0912 19:26:03.241352 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d7b3e6e7-1f71-472a-a961-c100d7e8208f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-qf4p7" podUID="d7b3e6e7-1f71-472a-a961-c100d7e8208f" Sep 12 19:26:03.248971 containerd[1514]: time="2025-09-12T19:26:03.248728211Z" level=error msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" failed" error="failed to destroy network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.249356 kubelet[2683]: E0912 19:26:03.249099 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to destroy network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:03.249356 kubelet[2683]: E0912 19:26:03.249169 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6"} Sep 12 19:26:03.249356 kubelet[2683]: E0912 19:26:03.249232 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"06e002f4-3e23-487d-b3cb-f79cac263b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.249356 kubelet[2683]: E0912 19:26:03.249275 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"06e002f4-3e23-487d-b3cb-f79cac263b04\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bxklx" podUID="06e002f4-3e23-487d-b3cb-f79cac263b04" Sep 12 19:26:03.249666 containerd[1514]: time="2025-09-12T19:26:03.249232182Z" level=error msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" 
failed" error="failed to destroy network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.249741 kubelet[2683]: E0912 19:26:03.249415 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:03.249741 kubelet[2683]: E0912 19:26:03.249481 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"} Sep 12 19:26:03.249741 kubelet[2683]: E0912 19:26:03.249540 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfac3adb-2001-4e77-9843-4702a6abb198\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.249741 kubelet[2683]: E0912 19:26:03.249595 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfac3adb-2001-4e77-9843-4702a6abb198\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sz5ss" podUID="bfac3adb-2001-4e77-9843-4702a6abb198" Sep 12 19:26:03.254643 containerd[1514]: time="2025-09-12T19:26:03.254567101Z" level=error msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" failed" error="failed to destroy network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.255109 kubelet[2683]: E0912 19:26:03.254848 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:03.255109 kubelet[2683]: E0912 19:26:03.254913 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f"} Sep 12 19:26:03.255109 kubelet[2683]: E0912 19:26:03.254950 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7edf616c-f7f3-4891-b477-a2522e01e8c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.255109 kubelet[2683]: E0912 19:26:03.255034 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7edf616c-f7f3-4891-b477-a2522e01e8c8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb" podUID="7edf616c-f7f3-4891-b477-a2522e01e8c8" Sep 12 19:26:03.260121 containerd[1514]: time="2025-09-12T19:26:03.260066828Z" level=error msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" failed" error="failed to destroy network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.260449 kubelet[2683]: E0912 19:26:03.260413 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:03.260536 kubelet[2683]: E0912 19:26:03.260460 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26"} Sep 12 19:26:03.260536 
kubelet[2683]: E0912 19:26:03.260508 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:03.260713 kubelet[2683]: E0912 19:26:03.260537 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp" podUID="1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476" Sep 12 19:26:03.327713 containerd[1514]: time="2025-09-12T19:26:03.327648641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548bf8484-dn2jb,Uid:3ca95b62-d472-43b2-a369-e4f919c62dfe,Namespace:calico-system,Attempt:0,}" Sep 12 19:26:03.434691 containerd[1514]: time="2025-09-12T19:26:03.434462021Z" level=error msg="Failed to destroy network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.437802 containerd[1514]: time="2025-09-12T19:26:03.437571244Z" level=error msg="encountered an error cleaning up failed sandbox 
\"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.437802 containerd[1514]: time="2025-09-12T19:26:03.437717037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-548bf8484-dn2jb,Uid:3ca95b62-d472-43b2-a369-e4f919c62dfe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.437979 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2-shm.mount: Deactivated successfully. 
Sep 12 19:26:03.439997 kubelet[2683]: E0912 19:26:03.439333 2683 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:03.439997 kubelet[2683]: E0912 19:26:03.439523 2683 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548bf8484-dn2jb" Sep 12 19:26:03.439997 kubelet[2683]: E0912 19:26:03.439564 2683 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-548bf8484-dn2jb" Sep 12 19:26:03.440278 kubelet[2683]: E0912 19:26:03.439667 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-548bf8484-dn2jb_calico-system(3ca95b62-d472-43b2-a369-e4f919c62dfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-548bf8484-dn2jb_calico-system(3ca95b62-d472-43b2-a369-e4f919c62dfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548bf8484-dn2jb" podUID="3ca95b62-d472-43b2-a369-e4f919c62dfe" Sep 12 19:26:04.067490 kubelet[2683]: I0912 19:26:04.064834 2683 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:04.067796 containerd[1514]: time="2025-09-12T19:26:04.067190903Z" level=info msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" Sep 12 19:26:04.067796 containerd[1514]: time="2025-09-12T19:26:04.067716079Z" level=info msg="Ensure that sandbox c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2 in task-service has been cleanup successfully" Sep 12 19:26:04.108217 containerd[1514]: time="2025-09-12T19:26:04.108106733Z" level=error msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" failed" error="failed to destroy network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:04.108625 kubelet[2683]: E0912 19:26:04.108537 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:04.109202 kubelet[2683]: E0912 19:26:04.108642 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2"} Sep 12 19:26:04.109202 kubelet[2683]: E0912 19:26:04.108705 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ca95b62-d472-43b2-a369-e4f919c62dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:04.109202 kubelet[2683]: E0912 19:26:04.108738 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ca95b62-d472-43b2-a369-e4f919c62dfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-548bf8484-dn2jb" podUID="3ca95b62-d472-43b2-a369-e4f919c62dfe" Sep 12 19:26:13.112666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3488591018.mount: Deactivated successfully. 
Sep 12 19:26:13.233814 containerd[1514]: time="2025-09-12T19:26:13.233467604Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.3\" with image id \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\", size \"157078201\" in 11.242150946s" Sep 12 19:26:13.233814 containerd[1514]: time="2025-09-12T19:26:13.233574129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.3\" returns image reference \"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\"" Sep 12 19:26:13.233814 containerd[1514]: time="2025-09-12T19:26:13.208854325Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.3: active requests=0, bytes read=157078339" Sep 12 19:26:13.239840 containerd[1514]: time="2025-09-12T19:26:13.239756583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:13.306381 containerd[1514]: time="2025-09-12T19:26:13.306242770Z" level=info msg="ImageCreate event name:\"sha256:ce9c4ac0f175f22c56e80844e65379d9ebe1d8a4e2bbb38dc1db0f53a8826f0f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:13.307370 containerd[1514]: time="2025-09-12T19:26:13.307275588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:bcb8146fcaeced1e1c88fad3eaa697f1680746bd23c3e7e8d4535bc484c6f2a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:13.314975 containerd[1514]: time="2025-09-12T19:26:13.314776011Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Sep 12 19:26:13.373171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331772216.mount: 
Deactivated successfully. Sep 12 19:26:13.384723 containerd[1514]: time="2025-09-12T19:26:13.384390119Z" level=info msg="CreateContainer within sandbox \"e758dd923effd5c834c3dbe4d4d173658f78e74bc1d223b75f19af3ba5f6d3ea\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c383f88254193a7d19ac6dbae967646674ca12e6c8980093c97773fb662cfdf9\"" Sep 12 19:26:13.386159 containerd[1514]: time="2025-09-12T19:26:13.386120648Z" level=info msg="StartContainer for \"c383f88254193a7d19ac6dbae967646674ca12e6c8980093c97773fb662cfdf9\"" Sep 12 19:26:13.542373 systemd[1]: Started cri-containerd-c383f88254193a7d19ac6dbae967646674ca12e6c8980093c97773fb662cfdf9.scope - libcontainer container c383f88254193a7d19ac6dbae967646674ca12e6c8980093c97773fb662cfdf9. Sep 12 19:26:13.661449 containerd[1514]: time="2025-09-12T19:26:13.661315075Z" level=info msg="StartContainer for \"c383f88254193a7d19ac6dbae967646674ca12e6c8980093c97773fb662cfdf9\" returns successfully" Sep 12 19:26:13.773955 containerd[1514]: time="2025-09-12T19:26:13.773870541Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\"" Sep 12 19:26:13.856132 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Sep 12 19:26:13.860078 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Sep 12 19:26:13.895703 containerd[1514]: time="2025-09-12T19:26:13.895482678Z" level=error msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" failed" error="failed to destroy network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Sep 12 19:26:13.897122 kubelet[2683]: E0912 19:26:13.896122 2683 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:13.897122 kubelet[2683]: E0912 19:26:13.896291 2683 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"} Sep 12 19:26:13.897122 kubelet[2683]: E0912 19:26:13.896386 2683 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bfac3adb-2001-4e77-9843-4702a6abb198\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Sep 12 19:26:13.897122 kubelet[2683]: E0912 19:26:13.896449 2683 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bfac3adb-2001-4e77-9843-4702a6abb198\" 
with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-sz5ss" podUID="bfac3adb-2001-4e77-9843-4702a6abb198" Sep 12 19:26:14.158986 containerd[1514]: time="2025-09-12T19:26:14.158900703Z" level=info msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" Sep 12 19:26:14.266720 kubelet[2683]: I0912 19:26:14.266433 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8wtkh" podStartSLOduration=2.59387478 podStartE2EDuration="27.266356509s" podCreationTimestamp="2025-09-12 19:25:47 +0000 UTC" firstStartedPulling="2025-09-12 19:25:48.564765018 +0000 UTC m=+22.135974510" lastFinishedPulling="2025-09-12 19:26:13.237246741 +0000 UTC m=+46.808456239" observedRunningTime="2025-09-12 19:26:14.262270091 +0000 UTC m=+47.833479605" watchObservedRunningTime="2025-09-12 19:26:14.266356509 +0000 UTC m=+47.837566019" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.411 [INFO][3857] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.412 [INFO][3857] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" iface="eth0" netns="/var/run/netns/cni-3f287d13-0d6b-31c4-3ccd-e0dc3d088fa0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.413 [INFO][3857] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" iface="eth0" netns="/var/run/netns/cni-3f287d13-0d6b-31c4-3ccd-e0dc3d088fa0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.414 [INFO][3857] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" iface="eth0" netns="/var/run/netns/cni-3f287d13-0d6b-31c4-3ccd-e0dc3d088fa0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.414 [INFO][3857] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.414 [INFO][3857] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.685 [INFO][3883] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.687 [INFO][3883] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.687 [INFO][3883] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.704 [WARNING][3883] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.704 [INFO][3883] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.709 [INFO][3883] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:14.716771 containerd[1514]: 2025-09-12 19:26:14.713 [INFO][3857] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:14.719558 containerd[1514]: time="2025-09-12T19:26:14.718204994Z" level=info msg="TearDown network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" successfully" Sep 12 19:26:14.719558 containerd[1514]: time="2025-09-12T19:26:14.718254585Z" level=info msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" returns successfully" Sep 12 19:26:14.729341 systemd[1]: run-netns-cni\x2d3f287d13\x2d0d6b\x2d31c4\x2d3ccd\x2de0dc3d088fa0.mount: Deactivated successfully. 
Sep 12 19:26:14.769428 containerd[1514]: time="2025-09-12T19:26:14.769172869Z" level=info msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" Sep 12 19:26:14.941821 kubelet[2683]: I0912 19:26:14.939373 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair\") pod \"3ca95b62-d472-43b2-a369-e4f919c62dfe\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " Sep 12 19:26:14.942575 kubelet[2683]: I0912 19:26:14.942535 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hx6fh\" (UniqueName: \"kubernetes.io/projected/3ca95b62-d472-43b2-a369-e4f919c62dfe-kube-api-access-hx6fh\") pod \"3ca95b62-d472-43b2-a369-e4f919c62dfe\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " Sep 12 19:26:14.948426 kubelet[2683]: I0912 19:26:14.947804 2683 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle\") pod \"3ca95b62-d472-43b2-a369-e4f919c62dfe\" (UID: \"3ca95b62-d472-43b2-a369-e4f919c62dfe\") " Sep 12 19:26:14.982538 kubelet[2683]: I0912 19:26:14.981333 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "3ca95b62-d472-43b2-a369-e4f919c62dfe" (UID: "3ca95b62-d472-43b2-a369-e4f919c62dfe"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 19:26:14.986152 systemd[1]: var-lib-kubelet-pods-3ca95b62\x2dd472\x2d43b2\x2da369\x2de4f919c62dfe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhx6fh.mount: Deactivated successfully. 
Sep 12 19:26:14.994002 kubelet[2683]: I0912 19:26:14.993353 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3ca95b62-d472-43b2-a369-e4f919c62dfe" (UID: "3ca95b62-d472-43b2-a369-e4f919c62dfe"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 19:26:14.996788 kubelet[2683]: I0912 19:26:14.995690 2683 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ca95b62-d472-43b2-a369-e4f919c62dfe-kube-api-access-hx6fh" (OuterVolumeSpecName: "kube-api-access-hx6fh") pod "3ca95b62-d472-43b2-a369-e4f919c62dfe" (UID: "3ca95b62-d472-43b2-a369-e4f919c62dfe"). InnerVolumeSpecName "kube-api-access-hx6fh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 19:26:14.999638 systemd[1]: var-lib-kubelet-pods-3ca95b62\x2dd472\x2d43b2\x2da369\x2de4f919c62dfe-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.858 [INFO][3904] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.858 [INFO][3904] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" iface="eth0" netns="/var/run/netns/cni-131a6e8c-4802-2eab-aa93-1fbcd871ff2f" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.859 [INFO][3904] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" iface="eth0" netns="/var/run/netns/cni-131a6e8c-4802-2eab-aa93-1fbcd871ff2f" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.860 [INFO][3904] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" iface="eth0" netns="/var/run/netns/cni-131a6e8c-4802-2eab-aa93-1fbcd871ff2f" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.860 [INFO][3904] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.861 [INFO][3904] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.961 [INFO][3912] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.961 [INFO][3912] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:14.961 [INFO][3912] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:15.002 [WARNING][3912] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:15.002 [INFO][3912] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:15.007 [INFO][3912] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:15.016200 containerd[1514]: 2025-09-12 19:26:15.012 [INFO][3904] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:15.017471 containerd[1514]: time="2025-09-12T19:26:15.016621902Z" level=info msg="TearDown network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" successfully" Sep 12 19:26:15.017471 containerd[1514]: time="2025-09-12T19:26:15.016753701Z" level=info msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" returns successfully" Sep 12 19:26:15.020864 containerd[1514]: time="2025-09-12T19:26:15.020146480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-b4lq7,Uid:1d596bad-4e4f-445e-bb83-18354a67cb67,Namespace:calico-apiserver,Attempt:1,}" Sep 12 19:26:15.049413 kubelet[2683]: I0912 19:26:15.049227 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-backend-key-pair\") on node \"srv-gt1mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:26:15.049413 
kubelet[2683]: I0912 19:26:15.049346 2683 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hx6fh\" (UniqueName: \"kubernetes.io/projected/3ca95b62-d472-43b2-a369-e4f919c62dfe-kube-api-access-hx6fh\") on node \"srv-gt1mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:26:15.049413 kubelet[2683]: I0912 19:26:15.049377 2683 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ca95b62-d472-43b2-a369-e4f919c62dfe-whisker-ca-bundle\") on node \"srv-gt1mb.gb1.brightbox.com\" DevicePath \"\"" Sep 12 19:26:15.113449 systemd[1]: run-netns-cni\x2d131a6e8c\x2d4802\x2d2eab\x2daa93\x2d1fbcd871ff2f.mount: Deactivated successfully. Sep 12 19:26:15.212132 systemd[1]: Removed slice kubepods-besteffort-pod3ca95b62_d472_43b2_a369_e4f919c62dfe.slice - libcontainer container kubepods-besteffort-pod3ca95b62_d472_43b2_a369_e4f919c62dfe.slice. Sep 12 19:26:15.318790 systemd-networkd[1427]: cali75f9f21643a: Link UP Sep 12 19:26:15.323056 systemd-networkd[1427]: cali75f9f21643a: Gained carrier Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.082 [INFO][3933] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.097 [INFO][3933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0 calico-apiserver-746f98d4d8- calico-apiserver 1d596bad-4e4f-445e-bb83-18354a67cb67 880 0 2025-09-12 19:25:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:746f98d4d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com calico-apiserver-746f98d4d8-b4lq7 eth0 calico-apiserver [] [] [kns.calico-apiserver 
ksa.calico-apiserver.calico-apiserver] cali75f9f21643a [] [] }} ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.098 [INFO][3933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.154 [INFO][3945] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" HandleID="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.155 [INFO][3945] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" HandleID="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"calico-apiserver-746f98d4d8-b4lq7", "timestamp":"2025-09-12 19:26:15.15494076 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:15.387288 
containerd[1514]: 2025-09-12 19:26:15.155 [INFO][3945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.155 [INFO][3945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.155 [INFO][3945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.170 [INFO][3945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.183 [INFO][3945] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.189 [INFO][3945] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.192 [INFO][3945] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.200 [INFO][3945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.200 [INFO][3945] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.209 [INFO][3945] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.222 [INFO][3945] ipam/ipam.go 1243: Writing block in 
order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.242 [INFO][3945] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.1/26] block=192.168.35.0/26 handle="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.242 [INFO][3945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.1/26] handle="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.242 [INFO][3945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:15.387288 containerd[1514]: 2025-09-12 19:26:15.242 [INFO][3945] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.1/26] IPv6=[] ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" HandleID="k8s-pod-network.483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.393163 containerd[1514]: 2025-09-12 19:26:15.260 [INFO][3933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d596bad-4e4f-445e-bb83-18354a67cb67", 
ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-746f98d4d8-b4lq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75f9f21643a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:15.393163 containerd[1514]: 2025-09-12 19:26:15.261 [INFO][3933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.1/32] ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.393163 containerd[1514]: 2025-09-12 19:26:15.261 [INFO][3933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali75f9f21643a ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.393163 
containerd[1514]: 2025-09-12 19:26:15.320 [INFO][3933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.393163 containerd[1514]: 2025-09-12 19:26:15.326 [INFO][3933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d596bad-4e4f-445e-bb83-18354a67cb67", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d", Pod:"calico-apiserver-746f98d4d8-b4lq7", Endpoint:"eth0", 
ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75f9f21643a", MAC:"e6:2f:64:d5:ce:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:15.393163 containerd[1514]: 2025-09-12 19:26:15.383 [INFO][3933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-b4lq7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:15.460041 containerd[1514]: time="2025-09-12T19:26:15.458863135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:15.460041 containerd[1514]: time="2025-09-12T19:26:15.459017282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:15.460041 containerd[1514]: time="2025-09-12T19:26:15.459071265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:15.460041 containerd[1514]: time="2025-09-12T19:26:15.459240623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:15.486828 systemd[1]: Created slice kubepods-besteffort-pod8b085e5f_6688_4f76_b10b_dfccd4a66edf.slice - libcontainer container kubepods-besteffort-pod8b085e5f_6688_4f76_b10b_dfccd4a66edf.slice. Sep 12 19:26:15.534577 systemd[1]: run-containerd-runc-k8s.io-483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d-runc.i9QpH3.mount: Deactivated successfully. 
Sep 12 19:26:15.547484 systemd[1]: Started cri-containerd-483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d.scope - libcontainer container 483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d. Sep 12 19:26:15.555476 kubelet[2683]: I0912 19:26:15.553802 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b085e5f-6688-4f76-b10b-dfccd4a66edf-whisker-ca-bundle\") pod \"whisker-655866dbc-vrgnp\" (UID: \"8b085e5f-6688-4f76-b10b-dfccd4a66edf\") " pod="calico-system/whisker-655866dbc-vrgnp" Sep 12 19:26:15.555476 kubelet[2683]: I0912 19:26:15.554008 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2zrk\" (UniqueName: \"kubernetes.io/projected/8b085e5f-6688-4f76-b10b-dfccd4a66edf-kube-api-access-z2zrk\") pod \"whisker-655866dbc-vrgnp\" (UID: \"8b085e5f-6688-4f76-b10b-dfccd4a66edf\") " pod="calico-system/whisker-655866dbc-vrgnp" Sep 12 19:26:15.555476 kubelet[2683]: I0912 19:26:15.554073 2683 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8b085e5f-6688-4f76-b10b-dfccd4a66edf-whisker-backend-key-pair\") pod \"whisker-655866dbc-vrgnp\" (UID: \"8b085e5f-6688-4f76-b10b-dfccd4a66edf\") " pod="calico-system/whisker-655866dbc-vrgnp" Sep 12 19:26:15.648014 kubelet[2683]: I0912 19:26:15.647528 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 19:26:15.678894 containerd[1514]: time="2025-09-12T19:26:15.678724536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-b4lq7,Uid:1d596bad-4e4f-445e-bb83-18354a67cb67,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d\"" Sep 12 19:26:15.699397 containerd[1514]: 
time="2025-09-12T19:26:15.699038824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 19:26:15.811341 containerd[1514]: time="2025-09-12T19:26:15.810804215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-655866dbc-vrgnp,Uid:8b085e5f-6688-4f76-b10b-dfccd4a66edf,Namespace:calico-system,Attempt:0,}" Sep 12 19:26:16.139596 systemd-networkd[1427]: cali5335fe75270: Link UP Sep 12 19:26:16.141446 systemd-networkd[1427]: cali5335fe75270: Gained carrier Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:15.916 [INFO][4026] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:15.938 [INFO][4026] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0 whisker-655866dbc- calico-system 8b085e5f-6688-4f76-b10b-dfccd4a66edf 899 0 2025-09-12 19:26:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:655866dbc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com whisker-655866dbc-vrgnp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali5335fe75270 [] [] }} ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:15.939 [INFO][4026] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.023 
[INFO][4039] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" HandleID="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.023 [INFO][4039] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" HandleID="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000323860), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"whisker-655866dbc-vrgnp", "timestamp":"2025-09-12 19:26:16.023297901 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.023 [INFO][4039] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.023 [INFO][4039] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.023 [INFO][4039] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.041 [INFO][4039] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.055 [INFO][4039] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.063 [INFO][4039] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.069 [INFO][4039] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.076 [INFO][4039] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.076 [INFO][4039] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.078 [INFO][4039] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.098 [INFO][4039] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.111 [INFO][4039] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.2/26] block=192.168.35.0/26 handle="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.111 [INFO][4039] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.2/26] handle="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.111 [INFO][4039] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:16.185309 containerd[1514]: 2025-09-12 19:26:16.111 [INFO][4039] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.2/26] IPv6=[] ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" HandleID="k8s-pod-network.d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.118 [INFO][4026] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0", GenerateName:"whisker-655866dbc-", Namespace:"calico-system", SelfLink:"", UID:"8b085e5f-6688-4f76-b10b-dfccd4a66edf", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"655866dbc", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"whisker-655866dbc-vrgnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5335fe75270", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.118 [INFO][4026] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.2/32] ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.118 [INFO][4026] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5335fe75270 ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.143 [INFO][4026] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.143 [INFO][4026] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0", GenerateName:"whisker-655866dbc-", Namespace:"calico-system", SelfLink:"", UID:"8b085e5f-6688-4f76-b10b-dfccd4a66edf", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 26, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"655866dbc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a", Pod:"whisker-655866dbc-vrgnp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.35.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali5335fe75270", MAC:"ea:3a:bf:a3:24:4e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:16.195266 containerd[1514]: 2025-09-12 19:26:16.166 [INFO][4026] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a" 
Namespace="calico-system" Pod="whisker-655866dbc-vrgnp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--655866dbc--vrgnp-eth0" Sep 12 19:26:16.324270 containerd[1514]: time="2025-09-12T19:26:16.323179796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:16.324270 containerd[1514]: time="2025-09-12T19:26:16.323348179Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:16.324756 containerd[1514]: time="2025-09-12T19:26:16.323372645Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:16.326931 containerd[1514]: time="2025-09-12T19:26:16.326818687Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:16.390216 systemd[1]: Started cri-containerd-d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a.scope - libcontainer container d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a. 
Sep 12 19:26:16.554528 containerd[1514]: time="2025-09-12T19:26:16.554387897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-655866dbc-vrgnp,Uid:8b085e5f-6688-4f76-b10b-dfccd4a66edf,Namespace:calico-system,Attempt:0,} returns sandbox id \"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a\"" Sep 12 19:26:16.790356 kubelet[2683]: I0912 19:26:16.790161 2683 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ca95b62-d472-43b2-a369-e4f919c62dfe" path="/var/lib/kubelet/pods/3ca95b62-d472-43b2-a369-e4f919c62dfe/volumes" Sep 12 19:26:16.794144 containerd[1514]: time="2025-09-12T19:26:16.793346795Z" level=info msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.907 [INFO][4206] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.907 [INFO][4206] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" iface="eth0" netns="/var/run/netns/cni-23ed370c-84a9-f91c-be78-c78b61ef7aac" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.908 [INFO][4206] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" iface="eth0" netns="/var/run/netns/cni-23ed370c-84a9-f91c-be78-c78b61ef7aac" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.912 [INFO][4206] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" iface="eth0" netns="/var/run/netns/cni-23ed370c-84a9-f91c-be78-c78b61ef7aac" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.912 [INFO][4206] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:16.912 [INFO][4206] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.036 [INFO][4214] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.036 [INFO][4214] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.036 [INFO][4214] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.048 [WARNING][4214] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.048 [INFO][4214] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.050 [INFO][4214] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:17.059631 containerd[1514]: 2025-09-12 19:26:17.054 [INFO][4206] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:17.066985 containerd[1514]: time="2025-09-12T19:26:17.062130416Z" level=info msg="TearDown network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" successfully" Sep 12 19:26:17.066985 containerd[1514]: time="2025-09-12T19:26:17.062180656Z" level=info msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" returns successfully" Sep 12 19:26:17.066985 containerd[1514]: time="2025-09-12T19:26:17.064379657Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bxklx,Uid:06e002f4-3e23-487d-b3cb-f79cac263b04,Namespace:calico-system,Attempt:1,}" Sep 12 19:26:17.066841 systemd[1]: run-netns-cni\x2d23ed370c\x2d84a9\x2df91c\x2dbe78\x2dc78b61ef7aac.mount: Deactivated successfully. 
Sep 12 19:26:17.323540 systemd-networkd[1427]: cali75f9f21643a: Gained IPv6LL Sep 12 19:26:17.381668 systemd-networkd[1427]: cali5335fe75270: Gained IPv6LL Sep 12 19:26:17.489557 systemd-networkd[1427]: cali4e42488e5d8: Link UP Sep 12 19:26:17.489948 systemd-networkd[1427]: cali4e42488e5d8: Gained carrier Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.202 [INFO][4221] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.242 [INFO][4221] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0 csi-node-driver- calico-system 06e002f4-3e23-487d-b3cb-f79cac263b04 918 0 2025-09-12 19:25:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6c96d95cc7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com csi-node-driver-bxklx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e42488e5d8 [] [] }} ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.242 [INFO][4221] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.368 [INFO][4239] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" HandleID="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.370 [INFO][4239] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" HandleID="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001224c0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"csi-node-driver-bxklx", "timestamp":"2025-09-12 19:26:17.368547234 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.370 [INFO][4239] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.370 [INFO][4239] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.370 [INFO][4239] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.397 [INFO][4239] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.427 [INFO][4239] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.440 [INFO][4239] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.444 [INFO][4239] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.451 [INFO][4239] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.451 [INFO][4239] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.453 [INFO][4239] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129 Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.461 [INFO][4239] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.473 [INFO][4239] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.3/26] block=192.168.35.0/26 handle="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.473 [INFO][4239] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.3/26] handle="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.473 [INFO][4239] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:17.573655 containerd[1514]: 2025-09-12 19:26:17.474 [INFO][4239] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.3/26] IPv6=[] ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" HandleID="k8s-pod-network.1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.483 [INFO][4221] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06e002f4-3e23-487d-b3cb-f79cac263b04", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-bxklx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e42488e5d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.483 [INFO][4221] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.3/32] ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.483 [INFO][4221] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e42488e5d8 ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.493 [INFO][4221] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 
12 19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.506 [INFO][4221] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06e002f4-3e23-487d-b3cb-f79cac263b04", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129", Pod:"csi-node-driver-bxklx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e42488e5d8", MAC:"ba:86:57:f7:09:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 
19:26:17.576133 containerd[1514]: 2025-09-12 19:26:17.558 [INFO][4221] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129" Namespace="calico-system" Pod="csi-node-driver-bxklx" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:17.654811 containerd[1514]: time="2025-09-12T19:26:17.653715555Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:17.654811 containerd[1514]: time="2025-09-12T19:26:17.654708006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:17.655255 containerd[1514]: time="2025-09-12T19:26:17.654786181Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:17.655255 containerd[1514]: time="2025-09-12T19:26:17.655041894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:17.715160 systemd[1]: Started cri-containerd-1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129.scope - libcontainer container 1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129. 
Sep 12 19:26:17.772809 containerd[1514]: time="2025-09-12T19:26:17.772355280Z" level=info msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" Sep 12 19:26:17.773041 kernel: bpftool[4324]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Sep 12 19:26:17.790634 containerd[1514]: time="2025-09-12T19:26:17.790584590Z" level=info msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" Sep 12 19:26:17.796624 containerd[1514]: time="2025-09-12T19:26:17.796569273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bxklx,Uid:06e002f4-3e23-487d-b3cb-f79cac263b04,Namespace:calico-system,Attempt:1,} returns sandbox id \"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129\"" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.939 [INFO][4345] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.940 [INFO][4345] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" iface="eth0" netns="/var/run/netns/cni-3e47d9a7-b80e-c1ed-455c-30e97f7affd6" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.941 [INFO][4345] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" iface="eth0" netns="/var/run/netns/cni-3e47d9a7-b80e-c1ed-455c-30e97f7affd6" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.941 [INFO][4345] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" iface="eth0" netns="/var/run/netns/cni-3e47d9a7-b80e-c1ed-455c-30e97f7affd6" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.942 [INFO][4345] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:17.942 [INFO][4345] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.099 [INFO][4363] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.100 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.100 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.122 [WARNING][4363] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.123 [INFO][4363] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.126 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:18.144144 containerd[1514]: 2025-09-12 19:26:18.133 [INFO][4345] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:18.152991 containerd[1514]: time="2025-09-12T19:26:18.145406233Z" level=info msg="TearDown network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" successfully" Sep 12 19:26:18.152991 containerd[1514]: time="2025-09-12T19:26:18.145446000Z" level=info msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" returns successfully" Sep 12 19:26:18.152991 containerd[1514]: time="2025-09-12T19:26:18.149307076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hntgn,Uid:6fa90bc5-cf93-4dc8-ab8e-d382c74b770b,Namespace:calico-system,Attempt:1,}" Sep 12 19:26:18.151502 systemd[1]: run-netns-cni\x2d3e47d9a7\x2db80e\x2dc1ed\x2d455c\x2d30e97f7affd6.mount: Deactivated successfully. 
Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.976 [INFO][4353] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.980 [INFO][4353] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" iface="eth0" netns="/var/run/netns/cni-94b70eff-d96e-f62a-bb77-bdc91a6a8645" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.982 [INFO][4353] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" iface="eth0" netns="/var/run/netns/cni-94b70eff-d96e-f62a-bb77-bdc91a6a8645" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.982 [INFO][4353] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" iface="eth0" netns="/var/run/netns/cni-94b70eff-d96e-f62a-bb77-bdc91a6a8645" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.982 [INFO][4353] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:17.982 [INFO][4353] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.132 [INFO][4368] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.132 
[INFO][4368] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.133 [INFO][4368] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.161 [WARNING][4368] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.173 [INFO][4368] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.199 [INFO][4368] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:18.220664 containerd[1514]: 2025-09-12 19:26:18.207 [INFO][4353] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:18.223412 containerd[1514]: time="2025-09-12T19:26:18.223375685Z" level=info msg="TearDown network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" successfully" Sep 12 19:26:18.223573 containerd[1514]: time="2025-09-12T19:26:18.223544889Z" level=info msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" returns successfully" Sep 12 19:26:18.229580 systemd[1]: run-netns-cni\x2d94b70eff\x2dd96e\x2df62a\x2dbb77\x2dbdc91a6a8645.mount: Deactivated successfully. 
Sep 12 19:26:18.234845 containerd[1514]: time="2025-09-12T19:26:18.234804863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-55djp,Uid:1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476,Namespace:calico-apiserver,Attempt:1,}" Sep 12 19:26:18.518078 systemd-networkd[1427]: vxlan.calico: Link UP Sep 12 19:26:18.518103 systemd-networkd[1427]: vxlan.calico: Gained carrier Sep 12 19:26:18.702869 systemd-networkd[1427]: calid4172306ac4: Link UP Sep 12 19:26:18.704272 systemd-networkd[1427]: calid4172306ac4: Gained carrier Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.357 [INFO][4377] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0 goldmane-54d579b49d- calico-system 6fa90bc5-cf93-4dc8-ab8e-d382c74b770b 929 0 2025-09-12 19:25:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:54d579b49d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com goldmane-54d579b49d-hntgn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calid4172306ac4 [] [] }} ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.358 [INFO][4377] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.561 [INFO][4415] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" HandleID="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.561 [INFO][4415] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" HandleID="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003107b0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"goldmane-54d579b49d-hntgn", "timestamp":"2025-09-12 19:26:18.561506715 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.561 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.561 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.561 [INFO][4415] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.583 [INFO][4415] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.597 [INFO][4415] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.611 [INFO][4415] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.620 [INFO][4415] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.626 [INFO][4415] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.626 [INFO][4415] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.629 [INFO][4415] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232 Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.643 [INFO][4415] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.672 [INFO][4415] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.4/26] block=192.168.35.0/26 handle="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.672 [INFO][4415] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.4/26] handle="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.673 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:18.743554 containerd[1514]: 2025-09-12 19:26:18.673 [INFO][4415] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.4/26] IPv6=[] ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" HandleID="k8s-pod-network.3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 19:26:18.694 [INFO][4377] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-54d579b49d-hntgn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4172306ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 19:26:18.695 [INFO][4377] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.4/32] ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 19:26:18.695 [INFO][4377] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid4172306ac4 ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 19:26:18.705 [INFO][4377] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 
19:26:18.706 [INFO][4377] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b", ResourceVersion:"929", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232", Pod:"goldmane-54d579b49d-hntgn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4172306ac4", MAC:"96:93:fc:10:13:ee", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:18.745130 containerd[1514]: 2025-09-12 19:26:18.738 [INFO][4377] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232" Namespace="calico-system" Pod="goldmane-54d579b49d-hntgn" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:18.777426 containerd[1514]: time="2025-09-12T19:26:18.775728078Z" level=info msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" Sep 12 19:26:18.779945 containerd[1514]: time="2025-09-12T19:26:18.778496982Z" level=info msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" Sep 12 19:26:18.866954 systemd-networkd[1427]: cali51094904563: Link UP Sep 12 19:26:18.873366 systemd-networkd[1427]: cali51094904563: Gained carrier Sep 12 19:26:18.881180 containerd[1514]: time="2025-09-12T19:26:18.869738540Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:18.881180 containerd[1514]: time="2025-09-12T19:26:18.869848822Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:18.881180 containerd[1514]: time="2025-09-12T19:26:18.869868390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:18.881180 containerd[1514]: time="2025-09-12T19:26:18.870154759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.437 [INFO][4391] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0 calico-apiserver-746f98d4d8- calico-apiserver 1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476 930 0 2025-09-12 19:25:43 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:746f98d4d8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com calico-apiserver-746f98d4d8-55djp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali51094904563 [] [] }} ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.441 [INFO][4391] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.609 [INFO][4422] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" HandleID="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.610 [INFO][4422] ipam/ipam_plugin.go 265: 
Auto assigning IP ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" HandleID="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003905c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"calico-apiserver-746f98d4d8-55djp", "timestamp":"2025-09-12 19:26:18.609321014 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.610 [INFO][4422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.674 [INFO][4422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.675 [INFO][4422] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.734 [INFO][4422] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.764 [INFO][4422] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.793 [INFO][4422] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.802 [INFO][4422] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.807 [INFO][4422] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.808 [INFO][4422] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.812 [INFO][4422] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.827 [INFO][4422] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.844 [INFO][4422] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.5/26] block=192.168.35.0/26 handle="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.845 [INFO][4422] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.5/26] handle="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.845 [INFO][4422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:18.924421 containerd[1514]: 2025-09-12 19:26:18.846 [INFO][4422] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.5/26] IPv6=[] ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" HandleID="k8s-pod-network.4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.851 [INFO][4391] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-746f98d4d8-55djp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali51094904563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.852 [INFO][4391] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.5/32] ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.852 [INFO][4391] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51094904563 ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.877 [INFO][4391] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" 
Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.883 [INFO][4391] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf", Pod:"calico-apiserver-746f98d4d8-55djp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali51094904563", MAC:"3a:fe:85:29:b1:f2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:18.925774 containerd[1514]: 2025-09-12 19:26:18.914 [INFO][4391] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf" Namespace="calico-apiserver" Pod="calico-apiserver-746f98d4d8-55djp" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:18.961249 systemd[1]: Started cri-containerd-3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232.scope - libcontainer container 3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232. Sep 12 19:26:19.005910 containerd[1514]: time="2025-09-12T19:26:18.996320173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:19.006190 containerd[1514]: time="2025-09-12T19:26:19.005752586Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:19.006508 containerd[1514]: time="2025-09-12T19:26:19.006298875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:19.027195 containerd[1514]: time="2025-09-12T19:26:19.007181167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:19.195224 systemd[1]: Started cri-containerd-4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf.scope - libcontainer container 4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf. Sep 12 19:26:19.224151 systemd[1]: run-containerd-runc-k8s.io-4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf-runc.R8FWay.mount: Deactivated successfully. 
Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.002 [INFO][4491] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.002 [INFO][4491] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" iface="eth0" netns="/var/run/netns/cni-8c96c3f0-a908-aa4f-0d83-336a673e38aa" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.003 [INFO][4491] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" iface="eth0" netns="/var/run/netns/cni-8c96c3f0-a908-aa4f-0d83-336a673e38aa" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.003 [INFO][4491] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" iface="eth0" netns="/var/run/netns/cni-8c96c3f0-a908-aa4f-0d83-336a673e38aa" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.004 [INFO][4491] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.004 [INFO][4491] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.189 [INFO][4556] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.189 
[INFO][4556] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.190 [INFO][4556] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.253 [WARNING][4556] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.253 [INFO][4556] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.257 [INFO][4556] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:19.306663 containerd[1514]: 2025-09-12 19:26:19.291 [INFO][4491] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:19.317792 systemd[1]: run-netns-cni\x2d8c96c3f0\x2da908\x2daa4f\x2d0d83\x2d336a673e38aa.mount: Deactivated successfully. 
Sep 12 19:26:19.320240 containerd[1514]: time="2025-09-12T19:26:19.320196974Z" level=info msg="TearDown network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" successfully" Sep 12 19:26:19.321035 containerd[1514]: time="2025-09-12T19:26:19.320995970Z" level=info msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" returns successfully" Sep 12 19:26:19.324613 containerd[1514]: time="2025-09-12T19:26:19.324573396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qf4p7,Uid:d7b3e6e7-1f71-472a-a961-c100d7e8208f,Namespace:kube-system,Attempt:1,}" Sep 12 19:26:19.365633 systemd-networkd[1427]: cali4e42488e5d8: Gained IPv6LL Sep 12 19:26:19.374485 containerd[1514]: time="2025-09-12T19:26:19.372407347Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-54d579b49d-hntgn,Uid:6fa90bc5-cf93-4dc8-ab8e-d382c74b770b,Namespace:calico-system,Attempt:1,} returns sandbox id \"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232\"" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:18.999 [INFO][4490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:18.999 [INFO][4490] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" iface="eth0" netns="/var/run/netns/cni-5b838799-b945-d77a-ff35-17514fd9d0ea" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.002 [INFO][4490] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" iface="eth0" netns="/var/run/netns/cni-5b838799-b945-d77a-ff35-17514fd9d0ea" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.005 [INFO][4490] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" iface="eth0" netns="/var/run/netns/cni-5b838799-b945-d77a-ff35-17514fd9d0ea" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.005 [INFO][4490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.005 [INFO][4490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.325 [INFO][4557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.325 [INFO][4557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.325 [INFO][4557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.340 [WARNING][4557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.340 [INFO][4557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.347 [INFO][4557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:19.387850 containerd[1514]: 2025-09-12 19:26:19.363 [INFO][4490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:19.389690 containerd[1514]: time="2025-09-12T19:26:19.389655704Z" level=info msg="TearDown network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" successfully" Sep 12 19:26:19.389827 containerd[1514]: time="2025-09-12T19:26:19.389792584Z" level=info msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" returns successfully" Sep 12 19:26:19.393229 containerd[1514]: time="2025-09-12T19:26:19.393195663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b79d5b75-txtkb,Uid:7edf616c-f7f3-4891-b477-a2522e01e8c8,Namespace:calico-system,Attempt:1,}" Sep 12 19:26:19.866232 systemd-networkd[1427]: caliaf632028ac5: Link UP Sep 12 19:26:19.867970 systemd-networkd[1427]: caliaf632028ac5: Gained carrier Sep 12 19:26:19.874423 containerd[1514]: time="2025-09-12T19:26:19.874272422Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-746f98d4d8-55djp,Uid:1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf\"" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.492 [INFO][4601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0 coredns-668d6bf9bc- kube-system d7b3e6e7-1f71-472a-a961-c100d7e8208f 941 0 2025-09-12 19:25:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com coredns-668d6bf9bc-qf4p7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaf632028ac5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.493 [INFO][4601] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.642 [INFO][4627] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" HandleID="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.644 
[INFO][4627] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" HandleID="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000412130), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-qf4p7", "timestamp":"2025-09-12 19:26:19.6425688 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.645 [INFO][4627] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.645 [INFO][4627] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.646 [INFO][4627] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.676 [INFO][4627] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.705 [INFO][4627] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.730 [INFO][4627] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.740 [INFO][4627] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.757 [INFO][4627] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.761 [INFO][4627] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.774 [INFO][4627] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.792 [INFO][4627] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.822 [INFO][4627] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.6/26] block=192.168.35.0/26 handle="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.823 [INFO][4627] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.6/26] handle="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.823 [INFO][4627] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:19.928674 containerd[1514]: 2025-09-12 19:26:19.823 [INFO][4627] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.6/26] IPv6=[] ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" HandleID="k8s-pod-network.5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.934673 containerd[1514]: 2025-09-12 19:26:19.842 [INFO][4601] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d7b3e6e7-1f71-472a-a961-c100d7e8208f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-qf4p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf632028ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:19.934673 containerd[1514]: 2025-09-12 19:26:19.843 [INFO][4601] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.6/32] ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.934673 containerd[1514]: 2025-09-12 19:26:19.843 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaf632028ac5 ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.934673 containerd[1514]: 
2025-09-12 19:26:19.867 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:19.934673 containerd[1514]: 2025-09-12 19:26:19.868 [INFO][4601] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d7b3e6e7-1f71-472a-a961-c100d7e8208f", ResourceVersion:"941", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb", Pod:"coredns-668d6bf9bc-qf4p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"caliaf632028ac5", MAC:"fe:00:ca:4d:4e:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:19.934673 containerd[1514]: 2025-09-12 19:26:19.918 [INFO][4601] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb" Namespace="kube-system" Pod="coredns-668d6bf9bc-qf4p7" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:20.015284 containerd[1514]: time="2025-09-12T19:26:20.010625228Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:20.015284 containerd[1514]: time="2025-09-12T19:26:20.010736929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:20.015284 containerd[1514]: time="2025-09-12T19:26:20.010791334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:20.015284 containerd[1514]: time="2025-09-12T19:26:20.010934807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:20.064114 systemd-networkd[1427]: cali4aa31d8c5b8: Link UP Sep 12 19:26:20.071335 systemd-networkd[1427]: cali4aa31d8c5b8: Gained carrier Sep 12 19:26:20.092211 systemd[1]: Started cri-containerd-5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb.scope - libcontainer container 5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb. Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.559 [INFO][4613] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0 calico-kube-controllers-6b79d5b75- calico-system 7edf616c-f7f3-4891-b477-a2522e01e8c8 940 0 2025-09-12 19:25:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6b79d5b75 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com calico-kube-controllers-6b79d5b75-txtkb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4aa31d8c5b8 [] [] }} ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.560 [INFO][4613] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.754 [INFO][4633] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" HandleID="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.757 [INFO][4633] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" HandleID="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a6600), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"calico-kube-controllers-6b79d5b75-txtkb", "timestamp":"2025-09-12 19:26:19.753424231 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.757 [INFO][4633] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.829 [INFO][4633] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.829 [INFO][4633] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.862 [INFO][4633] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.914 [INFO][4633] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.945 [INFO][4633] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.953 [INFO][4633] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.971 [INFO][4633] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.971 [INFO][4633] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:19.987 [INFO][4633] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:20.011 [INFO][4633] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:20.037 [INFO][4633] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.7/26] block=192.168.35.0/26 handle="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:20.037 [INFO][4633] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.7/26] handle="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:20.038 [INFO][4633] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:20.121080 containerd[1514]: 2025-09-12 19:26:20.038 [INFO][4633] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.7/26] IPv6=[] ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" HandleID="k8s-pod-network.b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.044 [INFO][4613] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0", GenerateName:"calico-kube-controllers-6b79d5b75-", Namespace:"calico-system", SelfLink:"", UID:"7edf616c-f7f3-4891-b477-a2522e01e8c8", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b79d5b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6b79d5b75-txtkb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4aa31d8c5b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.046 [INFO][4613] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.7/32] ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.046 [INFO][4613] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aa31d8c5b8 ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.087 [INFO][4613] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.095 [INFO][4613] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0", GenerateName:"calico-kube-controllers-6b79d5b75-", Namespace:"calico-system", SelfLink:"", UID:"7edf616c-f7f3-4891-b477-a2522e01e8c8", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b79d5b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d", Pod:"calico-kube-controllers-6b79d5b75-txtkb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.7/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4aa31d8c5b8", MAC:"de:b6:63:a3:7c:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:20.122614 containerd[1514]: 2025-09-12 19:26:20.115 [INFO][4613] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d" Namespace="calico-system" Pod="calico-kube-controllers-6b79d5b75-txtkb" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:20.155433 systemd[1]: run-netns-cni\x2d5b838799\x2db945\x2dd77a\x2dff35\x2d17514fd9d0ea.mount: Deactivated successfully. Sep 12 19:26:20.228649 containerd[1514]: time="2025-09-12T19:26:20.228132651Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:20.228649 containerd[1514]: time="2025-09-12T19:26:20.228216894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:20.228649 containerd[1514]: time="2025-09-12T19:26:20.228241937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:20.228649 containerd[1514]: time="2025-09-12T19:26:20.228465255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:20.262162 systemd-networkd[1427]: cali51094904563: Gained IPv6LL Sep 12 19:26:20.279193 systemd[1]: Started cri-containerd-b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d.scope - libcontainer container b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d. 
Sep 12 19:26:20.323903 containerd[1514]: time="2025-09-12T19:26:20.323843711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-qf4p7,Uid:d7b3e6e7-1f71-472a-a961-c100d7e8208f,Namespace:kube-system,Attempt:1,} returns sandbox id \"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb\"" Sep 12 19:26:20.325244 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Sep 12 19:26:20.339446 containerd[1514]: time="2025-09-12T19:26:20.338536231Z" level=info msg="CreateContainer within sandbox \"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 19:26:20.408792 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2108232676.mount: Deactivated successfully. Sep 12 19:26:20.416218 containerd[1514]: time="2025-09-12T19:26:20.415629174Z" level=info msg="CreateContainer within sandbox \"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"15ae00065e861e96b43d55e992f07012db5972b06bd99ab6ceeaf1d46496b789\"" Sep 12 19:26:20.420145 containerd[1514]: time="2025-09-12T19:26:20.420097948Z" level=info msg="StartContainer for \"15ae00065e861e96b43d55e992f07012db5972b06bd99ab6ceeaf1d46496b789\"" Sep 12 19:26:20.511039 containerd[1514]: time="2025-09-12T19:26:20.510790410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6b79d5b75-txtkb,Uid:7edf616c-f7f3-4891-b477-a2522e01e8c8,Namespace:calico-system,Attempt:1,} returns sandbox id \"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d\"" Sep 12 19:26:20.521425 systemd[1]: Started cri-containerd-15ae00065e861e96b43d55e992f07012db5972b06bd99ab6ceeaf1d46496b789.scope - libcontainer container 15ae00065e861e96b43d55e992f07012db5972b06bd99ab6ceeaf1d46496b789. 
Sep 12 19:26:20.646668 systemd-networkd[1427]: calid4172306ac4: Gained IPv6LL Sep 12 19:26:20.656076 containerd[1514]: time="2025-09-12T19:26:20.654853941Z" level=info msg="StartContainer for \"15ae00065e861e96b43d55e992f07012db5972b06bd99ab6ceeaf1d46496b789\" returns successfully" Sep 12 19:26:21.155977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3530038595.mount: Deactivated successfully. Sep 12 19:26:21.346765 kubelet[2683]: I0912 19:26:21.346631 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-qf4p7" podStartSLOduration=50.346552717 podStartE2EDuration="50.346552717s" podCreationTimestamp="2025-09-12 19:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:26:21.344527078 +0000 UTC m=+54.915736594" watchObservedRunningTime="2025-09-12 19:26:21.346552717 +0000 UTC m=+54.917762231" Sep 12 19:26:21.413443 systemd-networkd[1427]: cali4aa31d8c5b8: Gained IPv6LL Sep 12 19:26:21.798701 systemd-networkd[1427]: caliaf632028ac5: Gained IPv6LL Sep 12 19:26:22.020612 containerd[1514]: time="2025-09-12T19:26:22.020549601Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:22.021934 containerd[1514]: time="2025-09-12T19:26:22.021888856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=47333864" Sep 12 19:26:22.022642 containerd[1514]: time="2025-09-12T19:26:22.022264517Z" level=info msg="ImageCreate event name:\"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:22.025737 containerd[1514]: time="2025-09-12T19:26:22.025165754Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:22.026472 containerd[1514]: time="2025-09-12T19:26:22.026432533Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 6.327301014s" Sep 12 19:26:22.026566 containerd[1514]: time="2025-09-12T19:26:22.026486693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\"" Sep 12 19:26:22.031131 containerd[1514]: time="2025-09-12T19:26:22.030946243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\"" Sep 12 19:26:22.034316 containerd[1514]: time="2025-09-12T19:26:22.034097947Z" level=info msg="CreateContainer within sandbox \"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Sep 12 19:26:22.068065 containerd[1514]: time="2025-09-12T19:26:22.067864475Z" level=info msg="CreateContainer within sandbox \"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99827b3442d9fe987e2ec51490fcd177655b68905daf038f495bcdd159d3f0a5\"" Sep 12 19:26:22.070650 containerd[1514]: time="2025-09-12T19:26:22.070501697Z" level=info msg="StartContainer for \"99827b3442d9fe987e2ec51490fcd177655b68905daf038f495bcdd159d3f0a5\"" Sep 12 19:26:22.190264 systemd[1]: Started cri-containerd-99827b3442d9fe987e2ec51490fcd177655b68905daf038f495bcdd159d3f0a5.scope - libcontainer container 
99827b3442d9fe987e2ec51490fcd177655b68905daf038f495bcdd159d3f0a5. Sep 12 19:26:22.275310 containerd[1514]: time="2025-09-12T19:26:22.273748544Z" level=info msg="StartContainer for \"99827b3442d9fe987e2ec51490fcd177655b68905daf038f495bcdd159d3f0a5\" returns successfully" Sep 12 19:26:23.434736 kubelet[2683]: I0912 19:26:23.434685 2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 19:26:23.966297 containerd[1514]: time="2025-09-12T19:26:23.966215283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:23.970120 containerd[1514]: time="2025-09-12T19:26:23.970000333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.3: active requests=0, bytes read=4661291" Sep 12 19:26:23.971982 containerd[1514]: time="2025-09-12T19:26:23.970695071Z" level=info msg="ImageCreate event name:\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:23.976309 containerd[1514]: time="2025-09-12T19:26:23.976275912Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:23.996800 containerd[1514]: time="2025-09-12T19:26:23.996749086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.3\" with image id \"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:e7113761fc7633d515882f0d48b5c8d0b8e62f3f9d34823f2ee194bb16d2ec44\", size \"6153986\" in 1.965616973s" Sep 12 19:26:24.003795 containerd[1514]: time="2025-09-12T19:26:24.003673834Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.3\" returns image reference 
\"sha256:9a4eedeed4a531acefb7f5d0a1b7e3856b1a9a24d9e7d25deef2134d7a734c2d\"" Sep 12 19:26:24.010105 containerd[1514]: time="2025-09-12T19:26:24.009933831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\"" Sep 12 19:26:24.012690 containerd[1514]: time="2025-09-12T19:26:24.012619982Z" level=info msg="CreateContainer within sandbox \"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Sep 12 19:26:24.038519 containerd[1514]: time="2025-09-12T19:26:24.038375478Z" level=info msg="CreateContainer within sandbox \"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"5f5203f4287367cbd9026044908d29cf86630d11a01664d2b1514dbcb6340d97\"" Sep 12 19:26:24.040533 containerd[1514]: time="2025-09-12T19:26:24.039522542Z" level=info msg="StartContainer for \"5f5203f4287367cbd9026044908d29cf86630d11a01664d2b1514dbcb6340d97\"" Sep 12 19:26:24.118274 systemd[1]: Started cri-containerd-5f5203f4287367cbd9026044908d29cf86630d11a01664d2b1514dbcb6340d97.scope - libcontainer container 5f5203f4287367cbd9026044908d29cf86630d11a01664d2b1514dbcb6340d97. 
Sep 12 19:26:24.205080 containerd[1514]: time="2025-09-12T19:26:24.204685965Z" level=info msg="StartContainer for \"5f5203f4287367cbd9026044908d29cf86630d11a01664d2b1514dbcb6340d97\" returns successfully" Sep 12 19:26:26.171590 containerd[1514]: time="2025-09-12T19:26:26.171273083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:26.179151 containerd[1514]: time="2025-09-12T19:26:26.179005621Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.3: active requests=0, bytes read=8760527" Sep 12 19:26:26.181567 containerd[1514]: time="2025-09-12T19:26:26.180528839Z" level=info msg="ImageCreate event name:\"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:26.184614 containerd[1514]: time="2025-09-12T19:26:26.184543219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:26.185982 containerd[1514]: time="2025-09-12T19:26:26.185733786Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.3\" with image id \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:f22c88018d8b58c4ef0052f594b216a13bd6852166ac131a538c5ab2fba23bb2\", size \"10253230\" in 2.174537051s" Sep 12 19:26:26.185982 containerd[1514]: time="2025-09-12T19:26:26.185788846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.3\" returns image reference \"sha256:666f4e02e75c30547109a06ed75b415a990a970811173aa741379cfaac4d9dd7\"" Sep 12 19:26:26.188374 containerd[1514]: time="2025-09-12T19:26:26.188319732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\"" Sep 12 
19:26:26.190769 containerd[1514]: time="2025-09-12T19:26:26.190716235Z" level=info msg="CreateContainer within sandbox \"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Sep 12 19:26:26.224402 containerd[1514]: time="2025-09-12T19:26:26.224349768Z" level=info msg="CreateContainer within sandbox \"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a0265f72f02f4ed6e1a4f0abe2db27793221df5530463924aeecdcabccf260a8\"" Sep 12 19:26:26.228565 containerd[1514]: time="2025-09-12T19:26:26.226173864Z" level=info msg="StartContainer for \"a0265f72f02f4ed6e1a4f0abe2db27793221df5530463924aeecdcabccf260a8\"" Sep 12 19:26:26.288240 systemd[1]: Started cri-containerd-a0265f72f02f4ed6e1a4f0abe2db27793221df5530463924aeecdcabccf260a8.scope - libcontainer container a0265f72f02f4ed6e1a4f0abe2db27793221df5530463924aeecdcabccf260a8. Sep 12 19:26:26.335496 containerd[1514]: time="2025-09-12T19:26:26.335275861Z" level=info msg="StartContainer for \"a0265f72f02f4ed6e1a4f0abe2db27793221df5530463924aeecdcabccf260a8\" returns successfully" Sep 12 19:26:26.738010 containerd[1514]: time="2025-09-12T19:26:26.737565625Z" level=info msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" Sep 12 19:26:26.788995 containerd[1514]: time="2025-09-12T19:26:26.788450131Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\"" Sep 12 19:26:26.947199 kubelet[2683]: I0912 19:26:26.946431 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-746f98d4d8-b4lq7" podStartSLOduration=37.614452951 podStartE2EDuration="43.946366074s" podCreationTimestamp="2025-09-12 19:25:43 +0000 UTC" firstStartedPulling="2025-09-12 19:26:15.698302196 +0000 UTC m=+49.269511687" lastFinishedPulling="2025-09-12 
19:26:22.030215301 +0000 UTC m=+55.601424810" observedRunningTime="2025-09-12 19:26:22.33717786 +0000 UTC m=+55.908387365" watchObservedRunningTime="2025-09-12 19:26:26.946366074 +0000 UTC m=+60.517575580" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.919 [WARNING][4973] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232", Pod:"goldmane-54d579b49d-hntgn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4172306ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:27.051780 
containerd[1514]: 2025-09-12 19:26:26.923 [INFO][4973] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.924 [INFO][4973] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" iface="eth0" netns="" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.924 [INFO][4973] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.924 [INFO][4973] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.997 [INFO][4998] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.999 [INFO][4998] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:26.999 [INFO][4998] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:27.040 [WARNING][4998] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:27.040 [INFO][4998] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:27.044 [INFO][4998] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:27.051780 containerd[1514]: 2025-09-12 19:26:27.049 [INFO][4973] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.053569 containerd[1514]: time="2025-09-12T19:26:27.051884721Z" level=info msg="TearDown network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" successfully" Sep 12 19:26:27.053569 containerd[1514]: time="2025-09-12T19:26:27.051929713Z" level=info msg="StopPodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" returns successfully" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.944 [INFO][4988] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.946 [INFO][4988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" iface="eth0" netns="/var/run/netns/cni-07c25ba2-df5a-7389-2dd0-e6c62c64d3ff" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.948 [INFO][4988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" iface="eth0" netns="/var/run/netns/cni-07c25ba2-df5a-7389-2dd0-e6c62c64d3ff" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.954 [INFO][4988] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" iface="eth0" netns="/var/run/netns/cni-07c25ba2-df5a-7389-2dd0-e6c62c64d3ff" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.954 [INFO][4988] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:26.955 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.030 [INFO][5004] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.032 [INFO][5004] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.044 [INFO][5004] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.059 [WARNING][5004] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.059 [INFO][5004] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.063 [INFO][5004] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:27.068291 containerd[1514]: 2025-09-12 19:26:27.065 [INFO][4988] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Sep 12 19:26:27.074433 containerd[1514]: time="2025-09-12T19:26:27.069300180Z" level=info msg="TearDown network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" successfully" Sep 12 19:26:27.074433 containerd[1514]: time="2025-09-12T19:26:27.069340763Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" returns successfully" Sep 12 19:26:27.074433 containerd[1514]: time="2025-09-12T19:26:27.073112532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sz5ss,Uid:bfac3adb-2001-4e77-9843-4702a6abb198,Namespace:kube-system,Attempt:1,}" Sep 12 19:26:27.076007 systemd[1]: run-netns-cni\x2d07c25ba2\x2ddf5a\x2d7389\x2d2dd0\x2de6c62c64d3ff.mount: Deactivated successfully. 
Sep 12 19:26:27.143857 containerd[1514]: time="2025-09-12T19:26:27.143782258Z" level=info msg="RemovePodSandbox for \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" Sep 12 19:26:27.152140 containerd[1514]: time="2025-09-12T19:26:27.152039350Z" level=info msg="Forcibly stopping sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\"" Sep 12 19:26:27.343950 systemd-networkd[1427]: calied865fc9803: Link UP Sep 12 19:26:27.345658 systemd-networkd[1427]: calied865fc9803: Gained carrier Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.245 [WARNING][5032] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0", GenerateName:"goldmane-54d579b49d-", Namespace:"calico-system", SelfLink:"", UID:"6fa90bc5-cf93-4dc8-ab8e-d382c74b770b", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"54d579b49d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232", Pod:"goldmane-54d579b49d-hntgn", Endpoint:"eth0", ServiceAccountName:"goldmane", 
IPNetworks:[]string{"192.168.35.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calid4172306ac4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.245 [INFO][5032] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.245 [INFO][5032] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" iface="eth0" netns="" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.245 [INFO][5032] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.246 [INFO][5032] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.309 [INFO][5046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.309 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.327 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.347 [WARNING][5046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.347 [INFO][5046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" HandleID="k8s-pod-network.c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Workload="srv--gt1mb.gb1.brightbox.com-k8s-goldmane--54d579b49d--hntgn-eth0" Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.359 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:27.364067 containerd[1514]: 2025-09-12 19:26:27.361 [INFO][5032] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8" Sep 12 19:26:27.370882 containerd[1514]: time="2025-09-12T19:26:27.364439985Z" level=info msg="TearDown network for sandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" successfully" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.197 [INFO][5013] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0 coredns-668d6bf9bc- kube-system bfac3adb-2001-4e77-9843-4702a6abb198 992 0 2025-09-12 19:25:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-gt1mb.gb1.brightbox.com coredns-668d6bf9bc-sz5ss eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calied865fc9803 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.198 [INFO][5013] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.258 [INFO][5038] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" HandleID="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 
19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.259 [INFO][5038] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" HandleID="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d59c0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-gt1mb.gb1.brightbox.com", "pod":"coredns-668d6bf9bc-sz5ss", "timestamp":"2025-09-12 19:26:27.258559377 +0000 UTC"}, Hostname:"srv-gt1mb.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.259 [INFO][5038] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.259 [INFO][5038] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.259 [INFO][5038] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-gt1mb.gb1.brightbox.com' Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.270 [INFO][5038] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.278 [INFO][5038] ipam/ipam.go 394: Looking up existing affinities for host host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.290 [INFO][5038] ipam/ipam.go 511: Trying affinity for 192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.296 [INFO][5038] ipam/ipam.go 158: Attempting to load block cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.304 [INFO][5038] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.35.0/26 host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.304 [INFO][5038] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.35.0/26 handle="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.310 [INFO][5038] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548 Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.317 [INFO][5038] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.35.0/26 handle="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.326 [INFO][5038] 
ipam/ipam.go 1256: Successfully claimed IPs: [192.168.35.8/26] block=192.168.35.0/26 handle="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.327 [INFO][5038] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.35.8/26] handle="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" host="srv-gt1mb.gb1.brightbox.com" Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.327 [INFO][5038] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:27.386325 containerd[1514]: 2025-09-12 19:26:27.327 [INFO][5038] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.35.8/26] IPv6=[] ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" HandleID="k8s-pod-network.b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.387942 containerd[1514]: 2025-09-12 19:26:27.332 [INFO][5013] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfac3adb-2001-4e77-9843-4702a6abb198", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"", Pod:"coredns-668d6bf9bc-sz5ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied865fc9803", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:27.387942 containerd[1514]: 2025-09-12 19:26:27.332 [INFO][5013] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.35.8/32] ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.387942 containerd[1514]: 2025-09-12 19:26:27.332 [INFO][5013] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calied865fc9803 ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.387942 containerd[1514]: 
2025-09-12 19:26:27.346 [INFO][5013] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.387942 containerd[1514]: 2025-09-12 19:26:27.348 [INFO][5013] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfac3adb-2001-4e77-9843-4702a6abb198", ResourceVersion:"992", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548", Pod:"coredns-668d6bf9bc-sz5ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calied865fc9803", MAC:"0e:5b:47:00:54:89", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:27.387942 containerd[1514]: 2025-09-12 19:26:27.375 [INFO][5013] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548" Namespace="kube-system" Pod="coredns-668d6bf9bc-sz5ss" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0" Sep 12 19:26:27.428930 containerd[1514]: time="2025-09-12T19:26:27.428692860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 19:26:27.453548 containerd[1514]: time="2025-09-12T19:26:27.453312673Z" level=info msg="RemovePodSandbox \"c888420e55fbc2b13676467294126607856f7386d69b3dfceec424fae9c545d8\" returns successfully" Sep 12 19:26:27.454354 containerd[1514]: time="2025-09-12T19:26:27.454305742Z" level=info msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" Sep 12 19:26:27.537385 containerd[1514]: time="2025-09-12T19:26:27.524684047Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 12 19:26:27.537385 containerd[1514]: time="2025-09-12T19:26:27.524789638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 12 19:26:27.537385 containerd[1514]: time="2025-09-12T19:26:27.524805615Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:27.537385 containerd[1514]: time="2025-09-12T19:26:27.525015614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 12 19:26:27.589410 systemd[1]: run-containerd-runc-k8s.io-b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548-runc.K9DuV0.mount: Deactivated successfully. Sep 12 19:26:27.604414 systemd[1]: Started cri-containerd-b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548.scope - libcontainer container b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548. Sep 12 19:26:27.752941 containerd[1514]: time="2025-09-12T19:26:27.752875049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-sz5ss,Uid:bfac3adb-2001-4e77-9843-4702a6abb198,Namespace:kube-system,Attempt:1,} returns sandbox id \"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548\"" Sep 12 19:26:27.771036 containerd[1514]: time="2025-09-12T19:26:27.769837951Z" level=info msg="CreateContainer within sandbox \"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 19:26:27.809750 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066578471.mount: Deactivated successfully. 
Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.652 [WARNING][5077] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.652 [INFO][5077] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.655 [INFO][5077] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" iface="eth0" netns="" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.655 [INFO][5077] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.655 [INFO][5077] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.779 [INFO][5117] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.780 [INFO][5117] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.780 [INFO][5117] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.800 [WARNING][5117] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.800 [INFO][5117] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.805 [INFO][5117] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:27.811342 containerd[1514]: 2025-09-12 19:26:27.807 [INFO][5077] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:27.812812 containerd[1514]: time="2025-09-12T19:26:27.811365327Z" level=info msg="TearDown network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" successfully" Sep 12 19:26:27.812812 containerd[1514]: time="2025-09-12T19:26:27.811399546Z" level=info msg="StopPodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" returns successfully" Sep 12 19:26:27.812812 containerd[1514]: time="2025-09-12T19:26:27.812689228Z" level=info msg="RemovePodSandbox for \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" Sep 12 19:26:27.812812 containerd[1514]: time="2025-09-12T19:26:27.812767941Z" level=info msg="Forcibly stopping sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\"" Sep 12 19:26:27.815057 containerd[1514]: time="2025-09-12T19:26:27.813259678Z" level=info msg="CreateContainer within sandbox \"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c646716f714514dcd6207c533bb437b03e0d30de010aeca83d7c0130d54cf6c6\"" Sep 12 19:26:27.815057 containerd[1514]: time="2025-09-12T19:26:27.813875898Z" level=info msg="StartContainer for \"c646716f714514dcd6207c533bb437b03e0d30de010aeca83d7c0130d54cf6c6\"" Sep 12 19:26:27.885247 systemd[1]: Started cri-containerd-c646716f714514dcd6207c533bb437b03e0d30de010aeca83d7c0130d54cf6c6.scope - libcontainer container c646716f714514dcd6207c533bb437b03e0d30de010aeca83d7c0130d54cf6c6. 
Sep 12 19:26:28.014981 containerd[1514]: time="2025-09-12T19:26:28.014900458Z" level=info msg="StartContainer for \"c646716f714514dcd6207c533bb437b03e0d30de010aeca83d7c0130d54cf6c6\" returns successfully" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:27.972 [WARNING][5146] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" WorkloadEndpoint="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:27.972 [INFO][5146] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:27.972 [INFO][5146] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" iface="eth0" netns="" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:27.972 [INFO][5146] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:27.972 [INFO][5146] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.067 [INFO][5178] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.068 [INFO][5178] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.068 [INFO][5178] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.086 [WARNING][5178] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.086 [INFO][5178] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" HandleID="k8s-pod-network.c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Workload="srv--gt1mb.gb1.brightbox.com-k8s-whisker--548bf8484--dn2jb-eth0" Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.091 [INFO][5178] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:28.107757 containerd[1514]: 2025-09-12 19:26:28.096 [INFO][5146] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2" Sep 12 19:26:28.107757 containerd[1514]: time="2025-09-12T19:26:28.100638240Z" level=info msg="TearDown network for sandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" successfully" Sep 12 19:26:28.128430 containerd[1514]: time="2025-09-12T19:26:28.127446439Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:26:28.129368 containerd[1514]: time="2025-09-12T19:26:28.129280292Z" level=info msg="RemovePodSandbox \"c81ea4c5ed9932249ebb672df4404f1da1f7fb35ece96f9a94d21cb225eaa5a2\" returns successfully" Sep 12 19:26:28.132581 containerd[1514]: time="2025-09-12T19:26:28.132009270Z" level=info msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.323 [WARNING][5201] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0", GenerateName:"calico-kube-controllers-6b79d5b75-", Namespace:"calico-system", SelfLink:"", UID:"7edf616c-f7f3-4891-b477-a2522e01e8c8", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b79d5b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d", Pod:"calico-kube-controllers-6b79d5b75-txtkb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4aa31d8c5b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.325 [INFO][5201] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.325 [INFO][5201] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" iface="eth0" netns="" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.325 [INFO][5201] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.325 [INFO][5201] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.385 [INFO][5208] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.386 [INFO][5208] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.386 [INFO][5208] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.401 [WARNING][5208] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.401 [INFO][5208] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.407 [INFO][5208] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:28.415942 containerd[1514]: 2025-09-12 19:26:28.410 [INFO][5201] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.421119 containerd[1514]: time="2025-09-12T19:26:28.416551045Z" level=info msg="TearDown network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" successfully" Sep 12 19:26:28.421119 containerd[1514]: time="2025-09-12T19:26:28.416595502Z" level=info msg="StopPodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" returns successfully" Sep 12 19:26:28.421119 containerd[1514]: time="2025-09-12T19:26:28.419312033Z" level=info msg="RemovePodSandbox for \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" Sep 12 19:26:28.421119 containerd[1514]: time="2025-09-12T19:26:28.419349864Z" level=info msg="Forcibly stopping sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\"" Sep 12 19:26:28.586610 kubelet[2683]: I0912 19:26:28.585950 2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-sz5ss" 
podStartSLOduration=57.585874823 podStartE2EDuration="57.585874823s" podCreationTimestamp="2025-09-12 19:25:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 19:26:28.529005331 +0000 UTC m=+62.100214872" watchObservedRunningTime="2025-09-12 19:26:28.585874823 +0000 UTC m=+62.157084321" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.522 [WARNING][5222] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0", GenerateName:"calico-kube-controllers-6b79d5b75-", Namespace:"calico-system", SelfLink:"", UID:"7edf616c-f7f3-4891-b477-a2522e01e8c8", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6b79d5b75", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d", Pod:"calico-kube-controllers-6b79d5b75-txtkb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.35.7/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4aa31d8c5b8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.523 [INFO][5222] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.523 [INFO][5222] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" iface="eth0" netns="" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.523 [INFO][5222] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.523 [INFO][5222] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.643 [INFO][5230] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.645 [INFO][5230] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.645 [INFO][5230] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.660 [WARNING][5230] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.660 [INFO][5230] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" HandleID="k8s-pod-network.31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--kube--controllers--6b79d5b75--txtkb-eth0" Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.663 [INFO][5230] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:28.669667 containerd[1514]: 2025-09-12 19:26:28.665 [INFO][5222] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f" Sep 12 19:26:28.669667 containerd[1514]: time="2025-09-12T19:26:28.669286481Z" level=info msg="TearDown network for sandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" successfully" Sep 12 19:26:28.678089 containerd[1514]: time="2025-09-12T19:26:28.678022213Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:26:28.678229 containerd[1514]: time="2025-09-12T19:26:28.678177619Z" level=info msg="RemovePodSandbox \"31a9b12140d0db648157dccedc0664206805b62ad7a1eeff81e1873456a3da0f\" returns successfully" Sep 12 19:26:28.680131 containerd[1514]: time="2025-09-12T19:26:28.680096394Z" level=info msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.767 [WARNING][5246] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d596bad-4e4f-445e-bb83-18354a67cb67", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d", Pod:"calico-apiserver-746f98d4d8-b4lq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75f9f21643a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.767 [INFO][5246] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.767 [INFO][5246] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" iface="eth0" netns="" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.767 [INFO][5246] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.767 [INFO][5246] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.839 [INFO][5256] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.839 [INFO][5256] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.839 [INFO][5256] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.856 [WARNING][5256] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.856 [INFO][5256] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.859 [INFO][5256] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:28.869718 containerd[1514]: 2025-09-12 19:26:28.866 [INFO][5246] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:28.869718 containerd[1514]: time="2025-09-12T19:26:28.869535163Z" level=info msg="TearDown network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" successfully" Sep 12 19:26:28.869718 containerd[1514]: time="2025-09-12T19:26:28.869580739Z" level=info msg="StopPodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" returns successfully" Sep 12 19:26:28.874267 containerd[1514]: time="2025-09-12T19:26:28.871044114Z" level=info msg="RemovePodSandbox for \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" Sep 12 19:26:28.874267 containerd[1514]: time="2025-09-12T19:26:28.871467629Z" level=info msg="Forcibly stopping sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\"" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.943 [WARNING][5270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1d596bad-4e4f-445e-bb83-18354a67cb67", ResourceVersion:"971", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"483a1800410fcc62eb447545b020064907d784f7775014ffa103201810b03d6d", Pod:"calico-apiserver-746f98d4d8-b4lq7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali75f9f21643a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.944 [INFO][5270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.944 [INFO][5270] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" iface="eth0" netns="" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.944 [INFO][5270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.944 [INFO][5270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.985 [INFO][5277] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.985 [INFO][5277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.985 [INFO][5277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.997 [WARNING][5277] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:28.997 [INFO][5277] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" HandleID="k8s-pod-network.e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--b4lq7-eth0" Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:29.000 [INFO][5277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.006679 containerd[1514]: 2025-09-12 19:26:29.002 [INFO][5270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015" Sep 12 19:26:29.006679 containerd[1514]: time="2025-09-12T19:26:29.005643669Z" level=info msg="TearDown network for sandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" successfully" Sep 12 19:26:29.013938 containerd[1514]: time="2025-09-12T19:26:29.013902405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:26:29.014159 containerd[1514]: time="2025-09-12T19:26:29.014128823Z" level=info msg="RemovePodSandbox \"e226a12b265e4282fe74de76223a30b0d3b9eb8844a2be4008eb9cb9e0610015\" returns successfully" Sep 12 19:26:29.015085 containerd[1514]: time="2025-09-12T19:26:29.015053658Z" level=info msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" Sep 12 19:26:29.093352 systemd-networkd[1427]: calied865fc9803: Gained IPv6LL Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.072 [WARNING][5291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d7b3e6e7-1f71-472a-a961-c100d7e8208f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb", Pod:"coredns-668d6bf9bc-qf4p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"caliaf632028ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.072 [INFO][5291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.072 [INFO][5291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" iface="eth0" netns="" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.072 [INFO][5291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.072 [INFO][5291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.115 [INFO][5299] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.115 [INFO][5299] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.115 [INFO][5299] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.125 [WARNING][5299] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.125 [INFO][5299] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.128 [INFO][5299] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.133363 containerd[1514]: 2025-09-12 19:26:29.130 [INFO][5291] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.135438 containerd[1514]: time="2025-09-12T19:26:29.133439497Z" level=info msg="TearDown network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" successfully" Sep 12 19:26:29.135438 containerd[1514]: time="2025-09-12T19:26:29.133486909Z" level=info msg="StopPodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" returns successfully" Sep 12 19:26:29.135438 containerd[1514]: time="2025-09-12T19:26:29.134407806Z" level=info msg="RemovePodSandbox for \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" Sep 12 19:26:29.135438 containerd[1514]: time="2025-09-12T19:26:29.134466232Z" level=info msg="Forcibly stopping sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\"" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.228 [WARNING][5313] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d7b3e6e7-1f71-472a-a961-c100d7e8208f", ResourceVersion:"961", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"5baf02450959fcce3d847c9f55dd7f24d300b595a83702d5bf64538f161e1bdb", Pod:"coredns-668d6bf9bc-qf4p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaf632028ac5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.322314 containerd[1514]: 
2025-09-12 19:26:29.230 [INFO][5313] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.230 [INFO][5313] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" iface="eth0" netns="" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.230 [INFO][5313] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.230 [INFO][5313] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.294 [INFO][5320] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.295 [INFO][5320] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.295 [INFO][5320] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.309 [WARNING][5320] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.309 [INFO][5320] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" HandleID="k8s-pod-network.a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--qf4p7-eth0" Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.313 [INFO][5320] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.322314 containerd[1514]: 2025-09-12 19:26:29.316 [INFO][5313] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a" Sep 12 19:26:29.323868 containerd[1514]: time="2025-09-12T19:26:29.322496568Z" level=info msg="TearDown network for sandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" successfully" Sep 12 19:26:29.335031 containerd[1514]: time="2025-09-12T19:26:29.334976694Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:26:29.335171 containerd[1514]: time="2025-09-12T19:26:29.335075695Z" level=info msg="RemovePodSandbox \"a637f969748a2a9adf6c9c9f494113160623da275b8f9442007de532ffde473a\" returns successfully" Sep 12 19:26:29.336284 containerd[1514]: time="2025-09-12T19:26:29.336242402Z" level=info msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.415 [WARNING][5334] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06e002f4-3e23-487d-b3cb-f79cac263b04", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129", Pod:"csi-node-driver-bxklx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e42488e5d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.416 [INFO][5334] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.416 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" iface="eth0" netns="" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.416 [INFO][5334] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.416 [INFO][5334] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.456 [INFO][5341] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.457 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.457 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.468 [WARNING][5341] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.469 [INFO][5341] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.471 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.477119 containerd[1514]: 2025-09-12 19:26:29.474 [INFO][5334] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.477119 containerd[1514]: time="2025-09-12T19:26:29.476705760Z" level=info msg="TearDown network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" successfully" Sep 12 19:26:29.477119 containerd[1514]: time="2025-09-12T19:26:29.476753846Z" level=info msg="StopPodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" returns successfully" Sep 12 19:26:29.480852 containerd[1514]: time="2025-09-12T19:26:29.479667804Z" level=info msg="RemovePodSandbox for \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" Sep 12 19:26:29.480852 containerd[1514]: time="2025-09-12T19:26:29.479713661Z" level=info msg="Forcibly stopping sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\"" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.558 [WARNING][5355] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"06e002f4-3e23-487d-b3cb-f79cac263b04", ResourceVersion:"922", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6c96d95cc7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129", Pod:"csi-node-driver-bxklx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.35.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e42488e5d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.558 [INFO][5355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.558 [INFO][5355] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" iface="eth0" netns="" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.558 [INFO][5355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.558 [INFO][5355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.619 [INFO][5362] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.620 [INFO][5362] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.620 [INFO][5362] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.647 [WARNING][5362] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.648 [INFO][5362] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" HandleID="k8s-pod-network.d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Workload="srv--gt1mb.gb1.brightbox.com-k8s-csi--node--driver--bxklx-eth0" Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.653 [INFO][5362] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.662372 containerd[1514]: 2025-09-12 19:26:29.656 [INFO][5355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6" Sep 12 19:26:29.666197 containerd[1514]: time="2025-09-12T19:26:29.662348221Z" level=info msg="TearDown network for sandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" successfully" Sep 12 19:26:29.667590 containerd[1514]: time="2025-09-12T19:26:29.667534796Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Sep 12 19:26:29.667977 containerd[1514]: time="2025-09-12T19:26:29.667675012Z" level=info msg="RemovePodSandbox \"d337882cbaa1907a9d224ae6e95e6582b63715abd57cfeaf3b7fbacad78b15c6\" returns successfully" Sep 12 19:26:29.668552 containerd[1514]: time="2025-09-12T19:26:29.668510281Z" level=info msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.749 [WARNING][5376] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf", Pod:"calico-apiserver-746f98d4d8-55djp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali51094904563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.750 [INFO][5376] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.750 [INFO][5376] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" iface="eth0" netns="" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.750 [INFO][5376] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.750 [INFO][5376] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.788 [INFO][5384] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.788 [INFO][5384] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.788 [INFO][5384] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.800 [WARNING][5384] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.800 [INFO][5384] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.803 [INFO][5384] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:29.808841 containerd[1514]: 2025-09-12 19:26:29.805 [INFO][5376] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:29.814594 containerd[1514]: time="2025-09-12T19:26:29.808938999Z" level=info msg="TearDown network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" successfully" Sep 12 19:26:29.814594 containerd[1514]: time="2025-09-12T19:26:29.809158108Z" level=info msg="StopPodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" returns successfully" Sep 12 19:26:29.814594 containerd[1514]: time="2025-09-12T19:26:29.810254629Z" level=info msg="RemovePodSandbox for \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" Sep 12 19:26:29.814594 containerd[1514]: time="2025-09-12T19:26:29.810292000Z" level=info msg="Forcibly stopping sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\"" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.890 [WARNING][5398] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0", GenerateName:"calico-apiserver-746f98d4d8-", Namespace:"calico-apiserver", SelfLink:"", UID:"1a1e8a9d-b2d9-49ef-ad3f-47a3e6130476", ResourceVersion:"939", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"746f98d4d8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf", Pod:"calico-apiserver-746f98d4d8-55djp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.35.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali51094904563", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.894 [INFO][5398] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.894 [INFO][5398] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" iface="eth0" netns="" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.894 [INFO][5398] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.897 [INFO][5398] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.975 [INFO][5406] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.976 [INFO][5406] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.977 [INFO][5406] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.988 [WARNING][5406] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.989 [INFO][5406] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" HandleID="k8s-pod-network.98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Workload="srv--gt1mb.gb1.brightbox.com-k8s-calico--apiserver--746f98d4d8--55djp-eth0" Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.991 [INFO][5406] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Sep 12 19:26:30.006582 containerd[1514]: 2025-09-12 19:26:29.996 [INFO][5398] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26" Sep 12 19:26:30.006582 containerd[1514]: time="2025-09-12T19:26:30.005121808Z" level=info msg="TearDown network for sandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" successfully" Sep 12 19:26:30.120668 containerd[1514]: time="2025-09-12T19:26:30.120514931Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Sep 12 19:26:30.121108 containerd[1514]: time="2025-09-12T19:26:30.121077639Z" level=info msg="RemovePodSandbox \"98058b35dd209864709dbfa25d0d5a93bbe457115eb89a01d01a65ade3c6ef26\" returns successfully" Sep 12 19:26:31.225410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3482652125.mount: Deactivated successfully. 
Sep 12 19:26:32.282865 containerd[1514]: time="2025-09-12T19:26:32.281006256Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.3: active requests=0, bytes read=66357526" Sep 12 19:26:32.286270 containerd[1514]: time="2025-09-12T19:26:32.284826503Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" with image id \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\", size \"66357372\" in 6.096451026s" Sep 12 19:26:32.286270 containerd[1514]: time="2025-09-12T19:26:32.284884138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.3\" returns image reference \"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\"" Sep 12 19:26:32.288895 containerd[1514]: time="2025-09-12T19:26:32.288844898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:32.290324 containerd[1514]: time="2025-09-12T19:26:32.290285800Z" level=info msg="ImageCreate event name:\"sha256:a7d029fd8f6be94c26af980675c1650818e1e6e19dbd2f8c13e6e61963f021e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:32.292493 containerd[1514]: time="2025-09-12T19:26:32.292445973Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:46297703ab3739331a00a58f0d6a5498c8d3b6523ad947eed68592ee0f3e79f0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 19:26:32.338143 containerd[1514]: time="2025-09-12T19:26:32.338062377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\"" Sep 12 19:26:32.419808 containerd[1514]: time="2025-09-12T19:26:32.419725545Z" level=info msg="CreateContainer within sandbox 
\"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Sep 12 19:26:32.628861 containerd[1514]: time="2025-09-12T19:26:32.628769148Z" level=info msg="CreateContainer within sandbox \"3496442880062470e5e755859bb65b66d1fdade981a3cf12be6fd4cd6362e232\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca\"" Sep 12 19:26:32.632903 containerd[1514]: time="2025-09-12T19:26:32.632505950Z" level=info msg="StartContainer for \"f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca\"" Sep 12 19:26:32.885260 systemd[1]: run-containerd-runc-k8s.io-f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca-runc.3qaWDV.mount: Deactivated successfully. Sep 12 19:26:32.911222 systemd[1]: Started cri-containerd-f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca.scope - libcontainer container f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca. 
Sep 12 19:26:33.057924 containerd[1514]: time="2025-09-12T19:26:33.057866553Z" level=info msg="StartContainer for \"f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca\" returns successfully"
Sep 12 19:26:33.252528 containerd[1514]: time="2025-09-12T19:26:33.251424327Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:33.252528 containerd[1514]: time="2025-09-12T19:26:33.252107284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.3: active requests=0, bytes read=77"
Sep 12 19:26:33.257092 containerd[1514]: time="2025-09-12T19:26:33.257018804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" with image id \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:6a24147f11c1edce9d6ba79bdb0c2beadec53853fb43438a287291e67b41e51b\", size \"48826583\" in 918.862914ms"
Sep 12 19:26:33.257326 containerd[1514]: time="2025-09-12T19:26:33.257272287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.3\" returns image reference \"sha256:879f2443aed0573271114108bfec35d3e76419f98282ef796c646d0986c5ba6a\""
Sep 12 19:26:33.260926 containerd[1514]: time="2025-09-12T19:26:33.259699483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\""
Sep 12 19:26:33.263620 containerd[1514]: time="2025-09-12T19:26:33.263358245Z" level=info msg="CreateContainer within sandbox \"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Sep 12 19:26:33.283667 containerd[1514]: time="2025-09-12T19:26:33.283600351Z" level=info msg="CreateContainer within sandbox \"4cc0864dacab3848dda06b9e2bca6fd25ff244d2ab261171b02ea1e325fcf6cf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c191a1aad7dc0ffa47bcf346a1a06695767309bef084003709f2e92ec53052f7\""
Sep 12 19:26:33.286044 containerd[1514]: time="2025-09-12T19:26:33.285154848Z" level=info msg="StartContainer for \"c191a1aad7dc0ffa47bcf346a1a06695767309bef084003709f2e92ec53052f7\""
Sep 12 19:26:33.338239 systemd[1]: Started cri-containerd-c191a1aad7dc0ffa47bcf346a1a06695767309bef084003709f2e92ec53052f7.scope - libcontainer container c191a1aad7dc0ffa47bcf346a1a06695767309bef084003709f2e92ec53052f7.
Sep 12 19:26:33.428041 containerd[1514]: time="2025-09-12T19:26:33.427886664Z" level=info msg="StartContainer for \"c191a1aad7dc0ffa47bcf346a1a06695767309bef084003709f2e92ec53052f7\" returns successfully"
Sep 12 19:26:33.730589 kubelet[2683]: I0912 19:26:33.694116    2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-746f98d4d8-55djp" podStartSLOduration=37.300571229 podStartE2EDuration="50.680143626s" podCreationTimestamp="2025-09-12 19:25:43 +0000 UTC" firstStartedPulling="2025-09-12 19:26:19.879502312 +0000 UTC m=+53.450711816" lastFinishedPulling="2025-09-12 19:26:33.259074692 +0000 UTC m=+66.830284213" observedRunningTime="2025-09-12 19:26:33.638989527 +0000 UTC m=+67.210199048" watchObservedRunningTime="2025-09-12 19:26:33.680143626 +0000 UTC m=+67.251353129"
Sep 12 19:26:33.731929 kubelet[2683]: I0912 19:26:33.731479    2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-54d579b49d-hntgn" podStartSLOduration=33.802034628 podStartE2EDuration="46.731459181s" podCreationTimestamp="2025-09-12 19:25:47 +0000 UTC" firstStartedPulling="2025-09-12 19:26:19.38734169 +0000 UTC m=+52.958551181" lastFinishedPulling="2025-09-12 19:26:32.316766213 +0000 UTC m=+65.887975734" observedRunningTime="2025-09-12 19:26:33.728178289 +0000 UTC m=+67.299387794" watchObservedRunningTime="2025-09-12 19:26:33.731459181 +0000 UTC m=+67.302668684"
Sep 12 19:26:39.316195 containerd[1514]: time="2025-09-12T19:26:39.315911691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:39.318493 containerd[1514]: time="2025-09-12T19:26:39.317843814Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.3: active requests=0, bytes read=51277746"
Sep 12 19:26:39.320272 containerd[1514]: time="2025-09-12T19:26:39.318826430Z" level=info msg="ImageCreate event name:\"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:39.328672 containerd[1514]: time="2025-09-12T19:26:39.328629071Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:39.330258 containerd[1514]: time="2025-09-12T19:26:39.329650782Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" with image id \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:27c4187717f08f0a5727019d8beb7597665eb47e69eaa1d7d091a7e28913e577\", size \"52770417\" in 6.068987988s"
Sep 12 19:26:39.330258 containerd[1514]: time="2025-09-12T19:26:39.329707247Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.3\" returns image reference \"sha256:df191a54fb79de3c693f8b1b864a1bd3bd14f63b3fff9d5fa4869c471ce3cd37\""
Sep 12 19:26:39.355845 containerd[1514]: time="2025-09-12T19:26:39.355768868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\""
Sep 12 19:26:39.568733 containerd[1514]: time="2025-09-12T19:26:39.539535812Z" level=info msg="CreateContainer within sandbox \"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Sep 12 19:26:39.671604 containerd[1514]: time="2025-09-12T19:26:39.671504267Z" level=info msg="CreateContainer within sandbox \"b64e0305d1d74c463057bbb045333c90aec5d87b6301fb42a6b22e3947fa4e1d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1\""
Sep 12 19:26:39.678204 containerd[1514]: time="2025-09-12T19:26:39.678147131Z" level=info msg="StartContainer for \"719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1\""
Sep 12 19:26:39.814634 systemd[1]: Started cri-containerd-719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1.scope - libcontainer container 719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1.
Sep 12 19:26:39.943028 containerd[1514]: time="2025-09-12T19:26:39.942483610Z" level=info msg="StartContainer for \"719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1\" returns successfully"
Sep 12 19:26:40.889750 kubelet[2683]: I0912 19:26:40.881877    2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6b79d5b75-txtkb" podStartSLOduration=34.035004786 podStartE2EDuration="52.854230569s" podCreationTimestamp="2025-09-12 19:25:48 +0000 UTC" firstStartedPulling="2025-09-12 19:26:20.513451019 +0000 UTC m=+54.084660518" lastFinishedPulling="2025-09-12 19:26:39.332676764 +0000 UTC m=+72.903886301" observedRunningTime="2025-09-12 19:26:40.821644458 +0000 UTC m=+74.392853979" watchObservedRunningTime="2025-09-12 19:26:40.854230569 +0000 UTC m=+74.425440098"
Sep 12 19:26:43.158247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3085175761.mount: Deactivated successfully.
Sep 12 19:26:43.189459 containerd[1514]: time="2025-09-12T19:26:43.189369142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:43.191827 containerd[1514]: time="2025-09-12T19:26:43.191758252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.3: active requests=0, bytes read=33085545"
Sep 12 19:26:43.193154 containerd[1514]: time="2025-09-12T19:26:43.193086221Z" level=info msg="ImageCreate event name:\"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:43.196152 containerd[1514]: time="2025-09-12T19:26:43.196028048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:43.198082 containerd[1514]: time="2025-09-12T19:26:43.197428942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" with image id \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:29becebc47401da9997a2a30f4c25c511a5f379d17275680b048224829af71a5\", size \"33085375\" in 3.840881573s"
Sep 12 19:26:43.198082 containerd[1514]: time="2025-09-12T19:26:43.197479730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.3\" returns image reference \"sha256:7e29b0984d517678aab6ca138482c318989f6f28daf9d3b5dd6e4a5a3115ac16\""
Sep 12 19:26:43.199900 containerd[1514]: time="2025-09-12T19:26:43.199869541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\""
Sep 12 19:26:43.203810 containerd[1514]: time="2025-09-12T19:26:43.203770178Z" level=info msg="CreateContainer within sandbox \"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Sep 12 19:26:43.237477 containerd[1514]: time="2025-09-12T19:26:43.237359691Z" level=info msg="CreateContainer within sandbox \"d32906493467b0cb3f0818aa882371cd130aeb9c1db71484dabd73484955b19a\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0836ee095a5bc328d77a5ce5c45d8e84d45c0d40a2152ace2cf4649e9ec1c488\""
Sep 12 19:26:43.239264 containerd[1514]: time="2025-09-12T19:26:43.239139008Z" level=info msg="StartContainer for \"0836ee095a5bc328d77a5ce5c45d8e84d45c0d40a2152ace2cf4649e9ec1c488\""
Sep 12 19:26:43.340897 systemd[1]: Started cri-containerd-0836ee095a5bc328d77a5ce5c45d8e84d45c0d40a2152ace2cf4649e9ec1c488.scope - libcontainer container 0836ee095a5bc328d77a5ce5c45d8e84d45c0d40a2152ace2cf4649e9ec1c488.
Sep 12 19:26:43.437223 containerd[1514]: time="2025-09-12T19:26:43.436957712Z" level=info msg="StartContainer for \"0836ee095a5bc328d77a5ce5c45d8e84d45c0d40a2152ace2cf4649e9ec1c488\" returns successfully"
Sep 12 19:26:43.833010 kubelet[2683]: I0912 19:26:43.832160    2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-655866dbc-vrgnp" podStartSLOduration=2.191509147 podStartE2EDuration="28.831936649s" podCreationTimestamp="2025-09-12 19:26:15 +0000 UTC" firstStartedPulling="2025-09-12 19:26:16.559075581 +0000 UTC m=+50.130285078" lastFinishedPulling="2025-09-12 19:26:43.199503056 +0000 UTC m=+76.770712580" observedRunningTime="2025-09-12 19:26:43.82802717 +0000 UTC m=+77.399236691" watchObservedRunningTime="2025-09-12 19:26:43.831936649 +0000 UTC m=+77.403146147"
Sep 12 19:26:45.585826 containerd[1514]: time="2025-09-12T19:26:45.585699462Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:45.587614 containerd[1514]: time="2025-09-12T19:26:45.587461291Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3: active requests=0, bytes read=14698542"
Sep 12 19:26:45.590005 containerd[1514]: time="2025-09-12T19:26:45.588633071Z" level=info msg="ImageCreate event name:\"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:45.591990 containerd[1514]: time="2025-09-12T19:26:45.591883246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 19:26:45.593459 containerd[1514]: time="2025-09-12T19:26:45.593211062Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" with image id \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:731ab232ca708102ab332340b1274d5cd656aa896ecc5368ee95850b811df86f\", size \"16191197\" in 2.393006968s"
Sep 12 19:26:45.593459 containerd[1514]: time="2025-09-12T19:26:45.593255654Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.3\" returns image reference \"sha256:b8f31c4fdaed3fa08af64de3d37d65a4c2ea0d9f6f522cb60d2e0cb424f8dd8a\""
Sep 12 19:26:45.632441 containerd[1514]: time="2025-09-12T19:26:45.632383790Z" level=info msg="CreateContainer within sandbox \"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Sep 12 19:26:45.669633 containerd[1514]: time="2025-09-12T19:26:45.669571976Z" level=info msg="CreateContainer within sandbox \"1cfc7bcbdd0078a0b9815a3647742808d9bbefbd34831a0c00c2c5d2ef590129\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c\""
Sep 12 19:26:45.673551 containerd[1514]: time="2025-09-12T19:26:45.672043817Z" level=info msg="StartContainer for \"5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c\""
Sep 12 19:26:45.778209 systemd[1]: run-containerd-runc-k8s.io-5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c-runc.JpncpF.mount: Deactivated successfully.
Sep 12 19:26:45.792174 systemd[1]: Started cri-containerd-5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c.scope - libcontainer container 5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c.
Sep 12 19:26:45.874800 containerd[1514]: time="2025-09-12T19:26:45.874585568Z" level=info msg="StartContainer for \"5b57fc8c2b9624ccc8fb7e7f65faf07b2e1e41aa3980ee7b256b2db7561bad9c\" returns successfully"
Sep 12 19:26:46.242692 kubelet[2683]: I0912 19:26:46.241038    2683 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Sep 12 19:26:46.246607 kubelet[2683]: I0912 19:26:46.246468    2683 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Sep 12 19:26:47.008930 kubelet[2683]: I0912 19:26:47.008793    2683 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bxklx" podStartSLOduration=31.195336306 podStartE2EDuration="59.008755006s" podCreationTimestamp="2025-09-12 19:25:48 +0000 UTC" firstStartedPulling="2025-09-12 19:26:17.799132075 +0000 UTC m=+51.370341573" lastFinishedPulling="2025-09-12 19:26:45.612550782 +0000 UTC m=+79.183760273" observedRunningTime="2025-09-12 19:26:47.001103345 +0000 UTC m=+80.572312843" watchObservedRunningTime="2025-09-12 19:26:47.008755006 +0000 UTC m=+80.579964511"
Sep 12 19:26:47.380277 kubelet[2683]: I0912 19:26:47.376218    2683 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 12 19:26:47.725317 systemd[1]: Started sshd@9-10.230.43.118:22-139.178.68.195:47212.service - OpenSSH per-connection server daemon (139.178.68.195:47212).
Sep 12 19:26:48.822020 sshd[5782]: Accepted publickey for core from 139.178.68.195 port 47212 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:26:48.828027 sshd[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:26:48.842239 systemd-logind[1485]: New session 12 of user core.
Sep 12 19:26:48.852207 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 12 19:26:50.237881 sshd[5782]: pam_unix(sshd:session): session closed for user core
Sep 12 19:26:50.287755 systemd[1]: sshd@9-10.230.43.118:22-139.178.68.195:47212.service: Deactivated successfully.
Sep 12 19:26:50.307756 systemd[1]: session-12.scope: Deactivated successfully.
Sep 12 19:26:50.318160 systemd-logind[1485]: Session 12 logged out. Waiting for processes to exit.
Sep 12 19:26:50.326518 systemd-logind[1485]: Removed session 12.
Sep 12 19:26:55.404389 systemd[1]: Started sshd@10-10.230.43.118:22-139.178.68.195:57348.service - OpenSSH per-connection server daemon (139.178.68.195:57348).
Sep 12 19:26:56.397699 sshd[5800]: Accepted publickey for core from 139.178.68.195 port 57348 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:26:56.401227 sshd[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:26:56.410403 systemd-logind[1485]: New session 13 of user core.
Sep 12 19:26:56.417249 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 12 19:26:57.383737 sshd[5800]: pam_unix(sshd:session): session closed for user core
Sep 12 19:26:57.396134 systemd[1]: sshd@10-10.230.43.118:22-139.178.68.195:57348.service: Deactivated successfully.
Sep 12 19:26:57.401361 systemd[1]: session-13.scope: Deactivated successfully.
Sep 12 19:26:57.403107 systemd-logind[1485]: Session 13 logged out. Waiting for processes to exit.
Sep 12 19:26:57.404898 systemd-logind[1485]: Removed session 13.
Sep 12 19:27:02.548520 systemd[1]: Started sshd@11-10.230.43.118:22-139.178.68.195:59658.service - OpenSSH per-connection server daemon (139.178.68.195:59658).
Sep 12 19:27:03.515458 sshd[5821]: Accepted publickey for core from 139.178.68.195 port 59658 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:03.517599 sshd[5821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:03.527661 systemd-logind[1485]: New session 14 of user core.
Sep 12 19:27:03.533295 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 12 19:27:04.359844 sshd[5821]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:04.364379 systemd[1]: sshd@11-10.230.43.118:22-139.178.68.195:59658.service: Deactivated successfully.
Sep 12 19:27:04.368254 systemd[1]: session-14.scope: Deactivated successfully.
Sep 12 19:27:04.370398 systemd-logind[1485]: Session 14 logged out. Waiting for processes to exit.
Sep 12 19:27:04.372590 systemd-logind[1485]: Removed session 14.
Sep 12 19:27:04.522436 systemd[1]: Started sshd@12-10.230.43.118:22-139.178.68.195:59666.service - OpenSSH per-connection server daemon (139.178.68.195:59666).
Sep 12 19:27:05.440352 sshd[5835]: Accepted publickey for core from 139.178.68.195 port 59666 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:05.442725 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:05.449918 systemd-logind[1485]: New session 15 of user core.
Sep 12 19:27:05.462211 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 12 19:27:05.970844 systemd[1]: run-containerd-runc-k8s.io-f4e879e0f0d9531d426645436b780480a6bcf83e90d6ae4c099364ef9056bdca-runc.h1yQ7j.mount: Deactivated successfully.
Sep 12 19:27:06.476357 sshd[5835]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:06.498771 systemd[1]: sshd@12-10.230.43.118:22-139.178.68.195:59666.service: Deactivated successfully.
Sep 12 19:27:06.505016 systemd[1]: session-15.scope: Deactivated successfully.
Sep 12 19:27:06.507162 systemd-logind[1485]: Session 15 logged out. Waiting for processes to exit.
Sep 12 19:27:06.509471 systemd-logind[1485]: Removed session 15.
Sep 12 19:27:06.640232 systemd[1]: Started sshd@13-10.230.43.118:22-139.178.68.195:59668.service - OpenSSH per-connection server daemon (139.178.68.195:59668).
Sep 12 19:27:07.706167 sshd[5889]: Accepted publickey for core from 139.178.68.195 port 59668 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:07.712402 sshd[5889]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:07.719983 systemd-logind[1485]: New session 16 of user core.
Sep 12 19:27:07.725165 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 12 19:27:08.567051 sshd[5889]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:08.577362 systemd[1]: sshd@13-10.230.43.118:22-139.178.68.195:59668.service: Deactivated successfully.
Sep 12 19:27:08.580257 systemd[1]: session-16.scope: Deactivated successfully.
Sep 12 19:27:08.581582 systemd-logind[1485]: Session 16 logged out. Waiting for processes to exit.
Sep 12 19:27:08.583897 systemd-logind[1485]: Removed session 16.
Sep 12 19:27:10.836753 systemd[1]: run-containerd-runc-k8s.io-719b1147f8b1b14159b4c0c45b45aebf4a6bdb92bce0e45002bc8067aa47d6c1-runc.Sl8OjJ.mount: Deactivated successfully.
Sep 12 19:27:13.732093 systemd[1]: Started sshd@14-10.230.43.118:22-139.178.68.195:35300.service - OpenSSH per-connection server daemon (139.178.68.195:35300).
Sep 12 19:27:14.688907 sshd[5927]: Accepted publickey for core from 139.178.68.195 port 35300 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:14.693233 sshd[5927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:14.704317 systemd-logind[1485]: New session 17 of user core.
Sep 12 19:27:14.713233 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 12 19:27:15.785439 sshd[5927]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:15.794499 systemd[1]: sshd@14-10.230.43.118:22-139.178.68.195:35300.service: Deactivated successfully.
Sep 12 19:27:15.799574 systemd[1]: session-17.scope: Deactivated successfully.
Sep 12 19:27:15.801794 systemd-logind[1485]: Session 17 logged out. Waiting for processes to exit.
Sep 12 19:27:15.803471 systemd-logind[1485]: Removed session 17.
Sep 12 19:27:20.946924 systemd[1]: Started sshd@15-10.230.43.118:22-139.178.68.195:52822.service - OpenSSH per-connection server daemon (139.178.68.195:52822).
Sep 12 19:27:21.951017 sshd[5982]: Accepted publickey for core from 139.178.68.195 port 52822 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:21.954813 sshd[5982]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:21.965337 systemd-logind[1485]: New session 18 of user core.
Sep 12 19:27:21.973164 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 12 19:27:23.044955 sshd[5982]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:23.051747 systemd[1]: sshd@15-10.230.43.118:22-139.178.68.195:52822.service: Deactivated successfully.
Sep 12 19:27:23.057315 systemd[1]: session-18.scope: Deactivated successfully.
Sep 12 19:27:23.061769 systemd-logind[1485]: Session 18 logged out. Waiting for processes to exit.
Sep 12 19:27:23.063544 systemd-logind[1485]: Removed session 18.
Sep 12 19:27:28.208426 systemd[1]: Started sshd@16-10.230.43.118:22-139.178.68.195:52824.service - OpenSSH per-connection server daemon (139.178.68.195:52824).
Sep 12 19:27:29.128372 sshd[5997]: Accepted publickey for core from 139.178.68.195 port 52824 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:29.131574 sshd[5997]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:29.140949 systemd-logind[1485]: New session 19 of user core.
Sep 12 19:27:29.149202 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 12 19:27:29.930338 sshd[5997]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:29.935706 systemd[1]: sshd@16-10.230.43.118:22-139.178.68.195:52824.service: Deactivated successfully.
Sep 12 19:27:29.939868 systemd[1]: session-19.scope: Deactivated successfully.
Sep 12 19:27:29.942544 systemd-logind[1485]: Session 19 logged out. Waiting for processes to exit.
Sep 12 19:27:29.944495 systemd-logind[1485]: Removed session 19.
Sep 12 19:27:30.220746 containerd[1514]: time="2025-09-12T19:27:30.194686670Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\""
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.716 [WARNING][6017] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfac3adb-2001-4e77-9843-4702a6abb198", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548", Pod:"coredns-668d6bf9bc-sz5ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied865fc9803", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.719 [INFO][6017] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.719 [INFO][6017] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" iface="eth0" netns=""
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.719 [INFO][6017] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.719 [INFO][6017] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.943 [INFO][6025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.946 [INFO][6025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.946 [INFO][6025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.967 [WARNING][6025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.968 [INFO][6025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.970 [INFO][6025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:27:30.976157 containerd[1514]: 2025-09-12 19:27:30.973 [INFO][6017] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:30.984495 containerd[1514]: time="2025-09-12T19:27:30.980738728Z" level=info msg="TearDown network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" successfully"
Sep 12 19:27:30.984495 containerd[1514]: time="2025-09-12T19:27:30.980841482Z" level=info msg="StopPodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" returns successfully"
Sep 12 19:27:30.997838 containerd[1514]: time="2025-09-12T19:27:30.997504664Z" level=info msg="RemovePodSandbox for \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\""
Sep 12 19:27:31.001509 containerd[1514]: time="2025-09-12T19:27:31.001475099Z" level=info msg="Forcibly stopping sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\""
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.058 [WARNING][6039] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"bfac3adb-2001-4e77-9843-4702a6abb198", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.September, 12, 19, 25, 31, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-gt1mb.gb1.brightbox.com", ContainerID:"b3340af88806e4829a8af62bde93f93fc7381f365536c7c7ee1ad933e8143548", Pod:"coredns-668d6bf9bc-sz5ss", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.35.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calied865fc9803", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.058 [INFO][6039] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.058 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" iface="eth0" netns=""
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.058 [INFO][6039] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.058 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.114 [INFO][6046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.114 [INFO][6046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.115 [INFO][6046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.124 [WARNING][6046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.124 [INFO][6046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" HandleID="k8s-pod-network.63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c" Workload="srv--gt1mb.gb1.brightbox.com-k8s-coredns--668d6bf9bc--sz5ss-eth0"
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.126 [INFO][6046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Sep 12 19:27:31.134205 containerd[1514]: 2025-09-12 19:27:31.130 [INFO][6039] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c"
Sep 12 19:27:31.136111 containerd[1514]: time="2025-09-12T19:27:31.134300319Z" level=info msg="TearDown network for sandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" successfully"
Sep 12 19:27:31.196376 containerd[1514]: time="2025-09-12T19:27:31.196203910Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Sep 12 19:27:31.196376 containerd[1514]: time="2025-09-12T19:27:31.196373456Z" level=info msg="RemovePodSandbox \"63a1f186e6694ace56a8358406ee2f8f2d2756a2884d9655af905742effaab7c\" returns successfully"
Sep 12 19:27:35.098433 systemd[1]: Started sshd@17-10.230.43.118:22-139.178.68.195:59908.service - OpenSSH per-connection server daemon (139.178.68.195:59908).
Sep 12 19:27:36.086556 sshd[6056]: Accepted publickey for core from 139.178.68.195 port 59908 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:36.091442 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:36.109038 systemd-logind[1485]: New session 20 of user core.
Sep 12 19:27:36.117444 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 12 19:27:37.474891 sshd[6056]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:37.487689 systemd-logind[1485]: Session 20 logged out. Waiting for processes to exit.
Sep 12 19:27:37.488893 systemd[1]: sshd@17-10.230.43.118:22-139.178.68.195:59908.service: Deactivated successfully.
Sep 12 19:27:37.494734 systemd[1]: session-20.scope: Deactivated successfully.
Sep 12 19:27:37.498662 systemd-logind[1485]: Removed session 20.
Sep 12 19:27:37.637571 systemd[1]: Started sshd@18-10.230.43.118:22-139.178.68.195:59918.service - OpenSSH per-connection server daemon (139.178.68.195:59918).
Sep 12 19:27:38.581259 sshd[6090]: Accepted publickey for core from 139.178.68.195 port 59918 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:38.583741 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:38.591924 systemd-logind[1485]: New session 21 of user core.
Sep 12 19:27:38.598207 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 12 19:27:39.606884 sshd[6090]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:39.616370 systemd[1]: sshd@18-10.230.43.118:22-139.178.68.195:59918.service: Deactivated successfully.
Sep 12 19:27:39.619671 systemd[1]: session-21.scope: Deactivated successfully.
Sep 12 19:27:39.620951 systemd-logind[1485]: Session 21 logged out. Waiting for processes to exit.
Sep 12 19:27:39.623540 systemd-logind[1485]: Removed session 21.
Sep 12 19:27:39.770395 systemd[1]: Started sshd@19-10.230.43.118:22-139.178.68.195:59928.service - OpenSSH per-connection server daemon (139.178.68.195:59928).
Sep 12 19:27:40.700216 sshd[6101]: Accepted publickey for core from 139.178.68.195 port 59928 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:40.702869 sshd[6101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:40.715376 systemd-logind[1485]: New session 22 of user core.
Sep 12 19:27:40.723319 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 12 19:27:42.252040 sshd[6101]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:42.267634 systemd[1]: sshd@19-10.230.43.118:22-139.178.68.195:59928.service: Deactivated successfully.
Sep 12 19:27:42.272407 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 19:27:42.275718 systemd-logind[1485]: Session 22 logged out. Waiting for processes to exit.
Sep 12 19:27:42.278182 systemd-logind[1485]: Removed session 22.
Sep 12 19:27:42.408399 systemd[1]: Started sshd@20-10.230.43.118:22-139.178.68.195:57648.service - OpenSSH per-connection server daemon (139.178.68.195:57648).
Sep 12 19:27:43.370625 sshd[6147]: Accepted publickey for core from 139.178.68.195 port 57648 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:43.374642 sshd[6147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:43.382614 systemd-logind[1485]: New session 23 of user core.
Sep 12 19:27:43.390159 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 19:27:44.835652 sshd[6147]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:44.851259 systemd[1]: sshd@20-10.230.43.118:22-139.178.68.195:57648.service: Deactivated successfully.
Sep 12 19:27:44.858137 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 19:27:44.863851 systemd-logind[1485]: Session 23 logged out. Waiting for processes to exit.
Sep 12 19:27:44.870094 systemd-logind[1485]: Removed session 23.
Sep 12 19:27:44.995147 systemd[1]: Started sshd@21-10.230.43.118:22-139.178.68.195:57652.service - OpenSSH per-connection server daemon (139.178.68.195:57652).
Sep 12 19:27:45.942611 sshd[6158]: Accepted publickey for core from 139.178.68.195 port 57652 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:45.946435 sshd[6158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:45.955066 systemd-logind[1485]: New session 24 of user core.
Sep 12 19:27:45.961179 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 19:27:47.264565 sshd[6158]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:47.310854 systemd[1]: sshd@21-10.230.43.118:22-139.178.68.195:57652.service: Deactivated successfully.
Sep 12 19:27:47.319397 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 19:27:47.324718 systemd-logind[1485]: Session 24 logged out. Waiting for processes to exit.
Sep 12 19:27:47.331525 systemd-logind[1485]: Removed session 24.
Sep 12 19:27:52.447241 systemd[1]: Started sshd@22-10.230.43.118:22-139.178.68.195:40132.service - OpenSSH per-connection server daemon (139.178.68.195:40132).
Sep 12 19:27:53.475931 sshd[6212]: Accepted publickey for core from 139.178.68.195 port 40132 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:27:53.486215 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:27:53.501393 systemd-logind[1485]: New session 25 of user core.
Sep 12 19:27:53.511196 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 19:27:55.005246 sshd[6212]: pam_unix(sshd:session): session closed for user core
Sep 12 19:27:55.019322 systemd[1]: sshd@22-10.230.43.118:22-139.178.68.195:40132.service: Deactivated successfully.
Sep 12 19:27:55.025950 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 19:27:55.031507 systemd-logind[1485]: Session 25 logged out. Waiting for processes to exit.
Sep 12 19:27:55.035298 systemd-logind[1485]: Removed session 25.
Sep 12 19:28:00.182776 systemd[1]: Started sshd@23-10.230.43.118:22-139.178.68.195:34334.service - OpenSSH per-connection server daemon (139.178.68.195:34334).
Sep 12 19:28:01.146363 sshd[6232]: Accepted publickey for core from 139.178.68.195 port 34334 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:28:01.162324 sshd[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:28:01.178999 systemd-logind[1485]: New session 26 of user core.
Sep 12 19:28:01.185516 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 12 19:28:02.557441 sshd[6232]: pam_unix(sshd:session): session closed for user core
Sep 12 19:28:02.576389 systemd-logind[1485]: Session 26 logged out. Waiting for processes to exit.
Sep 12 19:28:02.578507 systemd[1]: sshd@23-10.230.43.118:22-139.178.68.195:34334.service: Deactivated successfully.
Sep 12 19:28:02.582628 systemd[1]: session-26.scope: Deactivated successfully.
Sep 12 19:28:02.588201 systemd-logind[1485]: Removed session 26.
Sep 12 19:28:07.717322 systemd[1]: Started sshd@24-10.230.43.118:22-139.178.68.195:34346.service - OpenSSH per-connection server daemon (139.178.68.195:34346).
Sep 12 19:28:08.763563 sshd[6287]: Accepted publickey for core from 139.178.68.195 port 34346 ssh2: RSA SHA256:dkjv4dzdxNx6D5mJfOKHLwjtsDmLV1bsqsLWNbTbrhg
Sep 12 19:28:08.771920 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 19:28:08.783191 systemd-logind[1485]: New session 27 of user core.
Sep 12 19:28:08.789181 systemd[1]: Started session-27.scope - Session 27 of User core.
Sep 12 19:28:10.056353 sshd[6287]: pam_unix(sshd:session): session closed for user core
Sep 12 19:28:10.062352 systemd-logind[1485]: Session 27 logged out. Waiting for processes to exit.
Sep 12 19:28:10.064143 systemd[1]: sshd@24-10.230.43.118:22-139.178.68.195:34346.service: Deactivated successfully.
Sep 12 19:28:10.069739 systemd[1]: session-27.scope: Deactivated successfully.
Sep 12 19:28:10.073072 systemd-logind[1485]: Removed session 27.