Aug 13 07:54:42.019625 kernel: Linux version 6.6.100-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 22:14:58 -00 2025
Aug 13 07:54:42.019660 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:54:42.019674 kernel: BIOS-provided physical RAM map:
Aug 13 07:54:42.019689 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Aug 13 07:54:42.019699 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Aug 13 07:54:42.019708 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 07:54:42.019719 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Aug 13 07:54:42.019730 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Aug 13 07:54:42.019740 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 07:54:42.019749 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 07:54:42.019759 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 07:54:42.019769 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 07:54:42.019792 kernel: NX (Execute Disable) protection: active
Aug 13 07:54:42.019808 kernel: APIC: Static calls initialized
Aug 13 07:54:42.019820 kernel: SMBIOS 2.8 present.
Aug 13 07:54:42.019836 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Aug 13 07:54:42.019847 kernel: Hypervisor detected: KVM
Aug 13 07:54:42.019863 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 07:54:42.019874 kernel: kvm-clock: using sched offset of 4905624260 cycles
Aug 13 07:54:42.019886 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 07:54:42.019897 kernel: tsc: Detected 2799.998 MHz processor
Aug 13 07:54:42.019908 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 07:54:42.019919 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 07:54:42.019930 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Aug 13 07:54:42.019941 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 07:54:42.019952 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 07:54:42.019967 kernel: Using GB pages for direct mapping
Aug 13 07:54:42.019978 kernel: ACPI: Early table checksum verification disabled
Aug 13 07:54:42.019989 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Aug 13 07:54:42.020000 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020011 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020022 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020033 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Aug 13 07:54:42.020043 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020054 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020069 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020080 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 07:54:42.020094 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Aug 13 07:54:42.020105 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Aug 13 07:54:42.020116 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Aug 13 07:54:42.020133 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Aug 13 07:54:42.020144 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Aug 13 07:54:42.020160 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Aug 13 07:54:42.020172 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Aug 13 07:54:42.020183 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Aug 13 07:54:42.020203 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Aug 13 07:54:42.020216 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Aug 13 07:54:42.020227 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Aug 13 07:54:42.020260 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Aug 13 07:54:42.020272 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Aug 13 07:54:42.020290 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Aug 13 07:54:42.020301 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Aug 13 07:54:42.020312 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Aug 13 07:54:42.020323 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Aug 13 07:54:42.020334 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Aug 13 07:54:42.020345 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Aug 13 07:54:42.020357 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Aug 13 07:54:42.020368 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Aug 13 07:54:42.020385 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Aug 13 07:54:42.020402 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Aug 13 07:54:42.020414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Aug 13 07:54:42.020425 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Aug 13 07:54:42.020436 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Aug 13 07:54:42.020458 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Aug 13 07:54:42.020470 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Aug 13 07:54:42.020482 kernel: Zone ranges:
Aug 13 07:54:42.020493 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 07:54:42.020504 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Aug 13 07:54:42.020521 kernel: Normal empty
Aug 13 07:54:42.020533 kernel: Movable zone start for each node
Aug 13 07:54:42.020544 kernel: Early memory node ranges
Aug 13 07:54:42.020555 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 07:54:42.020567 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Aug 13 07:54:42.020578 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Aug 13 07:54:42.020589 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 07:54:42.020600 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 07:54:42.020618 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Aug 13 07:54:42.020630 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 07:54:42.020647 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 07:54:42.020659 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 07:54:42.020670 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 07:54:42.020681 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 07:54:42.020693 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 07:54:42.020712 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 07:54:42.020723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 07:54:42.020734 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 07:54:42.020746 kernel: TSC deadline timer available
Aug 13 07:54:42.020762 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Aug 13 07:54:42.020774 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 07:54:42.020785 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 07:54:42.020796 kernel: Booting paravirtualized kernel on KVM
Aug 13 07:54:42.020808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 07:54:42.020819 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Aug 13 07:54:42.020831 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144
Aug 13 07:54:42.020842 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152
Aug 13 07:54:42.020853 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Aug 13 07:54:42.020869 kernel: kvm-guest: PV spinlocks enabled
Aug 13 07:54:42.020881 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 07:54:42.020895 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:54:42.020907 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 07:54:42.020919 kernel: random: crng init done
Aug 13 07:54:42.020930 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 07:54:42.020941 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Aug 13 07:54:42.020952 kernel: Fallback order for Node 0: 0
Aug 13 07:54:42.020969 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Aug 13 07:54:42.020985 kernel: Policy zone: DMA32
Aug 13 07:54:42.020998 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 07:54:42.021009 kernel: software IO TLB: area num 16.
Aug 13 07:54:42.021021 kernel: Memory: 1901528K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42876K init, 2316K bss, 194828K reserved, 0K cma-reserved)
Aug 13 07:54:42.021032 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Aug 13 07:54:42.021043 kernel: Kernel/User page tables isolation: enabled
Aug 13 07:54:42.021055 kernel: ftrace: allocating 37968 entries in 149 pages
Aug 13 07:54:42.021071 kernel: ftrace: allocated 149 pages with 4 groups
Aug 13 07:54:42.021083 kernel: Dynamic Preempt: voluntary
Aug 13 07:54:42.021099 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 07:54:42.021111 kernel: rcu: RCU event tracing is enabled.
Aug 13 07:54:42.021123 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Aug 13 07:54:42.021134 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 07:54:42.021158 kernel: Rude variant of Tasks RCU enabled.
Aug 13 07:54:42.021174 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 07:54:42.021186 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 07:54:42.021198 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Aug 13 07:54:42.021210 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Aug 13 07:54:42.021222 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 07:54:42.023291 kernel: Console: colour VGA+ 80x25
Aug 13 07:54:42.023307 kernel: printk: console [tty0] enabled
Aug 13 07:54:42.023332 kernel: printk: console [ttyS0] enabled
Aug 13 07:54:42.023343 kernel: ACPI: Core revision 20230628
Aug 13 07:54:42.023355 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 07:54:42.023373 kernel: x2apic enabled
Aug 13 07:54:42.023385 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 07:54:42.023417 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Aug 13 07:54:42.023430 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Aug 13 07:54:42.023452 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 07:54:42.023464 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Aug 13 07:54:42.023476 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Aug 13 07:54:42.023488 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 07:54:42.023500 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 07:54:42.023511 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 07:54:42.023530 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 07:54:42.023542 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 07:54:42.023554 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 07:54:42.023565 kernel: MDS: Mitigation: Clear CPU buffers
Aug 13 07:54:42.023577 kernel: MMIO Stale Data: Unknown: No mitigations
Aug 13 07:54:42.023589 kernel: SRBDS: Unknown: Dependent on hypervisor status
Aug 13 07:54:42.023600 kernel: ITS: Mitigation: Aligned branch/return thunks
Aug 13 07:54:42.023612 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 07:54:42.023625 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 07:54:42.023636 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 07:54:42.023648 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 07:54:42.023665 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Aug 13 07:54:42.023677 kernel: Freeing SMP alternatives memory: 32K
Aug 13 07:54:42.023695 kernel: pid_max: default: 32768 minimum: 301
Aug 13 07:54:42.023708 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Aug 13 07:54:42.023719 kernel: landlock: Up and running.
Aug 13 07:54:42.023731 kernel: SELinux: Initializing.
Aug 13 07:54:42.023743 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:54:42.023755 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Aug 13 07:54:42.023767 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Aug 13 07:54:42.023779 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 07:54:42.023791 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 07:54:42.023810 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Aug 13 07:54:42.023822 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Aug 13 07:54:42.023840 kernel: signal: max sigframe size: 1776
Aug 13 07:54:42.023852 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 07:54:42.023864 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 07:54:42.023876 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Aug 13 07:54:42.023888 kernel: smp: Bringing up secondary CPUs ...
Aug 13 07:54:42.023900 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 07:54:42.023912 kernel: .... node #0, CPUs: #1
Aug 13 07:54:42.023929 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Aug 13 07:54:42.023942 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 07:54:42.023954 kernel: smpboot: Max logical packages: 16
Aug 13 07:54:42.023966 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Aug 13 07:54:42.023978 kernel: devtmpfs: initialized
Aug 13 07:54:42.023990 kernel: x86/mm: Memory block size: 128MB
Aug 13 07:54:42.024002 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 07:54:42.024014 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Aug 13 07:54:42.024026 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 07:54:42.024043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 07:54:42.024055 kernel: audit: initializing netlink subsys (disabled)
Aug 13 07:54:42.024067 kernel: audit: type=2000 audit(1755071679.879:1): state=initialized audit_enabled=0 res=1
Aug 13 07:54:42.024079 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 07:54:42.024100 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 07:54:42.024112 kernel: cpuidle: using governor menu
Aug 13 07:54:42.024124 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 07:54:42.024136 kernel: dca service started, version 1.12.1
Aug 13 07:54:42.024148 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Aug 13 07:54:42.024165 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 07:54:42.024177 kernel: PCI: Using configuration type 1 for base access
Aug 13 07:54:42.024189 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 07:54:42.024201 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 07:54:42.024213 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 07:54:42.024225 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 07:54:42.024252 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 07:54:42.024265 kernel: ACPI: Added _OSI(Module Device)
Aug 13 07:54:42.024276 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 07:54:42.024295 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 07:54:42.024308 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 07:54:42.024319 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Aug 13 07:54:42.024331 kernel: ACPI: Interpreter enabled
Aug 13 07:54:42.024343 kernel: ACPI: PM: (supports S0 S5)
Aug 13 07:54:42.024355 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 07:54:42.024367 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 07:54:42.024379 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 07:54:42.024391 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 07:54:42.024408 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 07:54:42.024695 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 07:54:42.024894 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 13 07:54:42.025071 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 13 07:54:42.025090 kernel: PCI host bridge to bus 0000:00
Aug 13 07:54:42.026348 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 07:54:42.026557 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 07:54:42.026765 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 07:54:42.026983 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 07:54:42.027157 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 07:54:42.027468 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Aug 13 07:54:42.027633 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 07:54:42.027887 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Aug 13 07:54:42.028113 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Aug 13 07:54:42.028335 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Aug 13 07:54:42.028528 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Aug 13 07:54:42.028705 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Aug 13 07:54:42.028900 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 07:54:42.029091 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.029292 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Aug 13 07:54:42.029558 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.029740 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Aug 13 07:54:42.029964 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.030154 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Aug 13 07:54:42.030363 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.030550 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Aug 13 07:54:42.030769 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.030958 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Aug 13 07:54:42.031164 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.034709 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Aug 13 07:54:42.034917 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.035108 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Aug 13 07:54:42.035346 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Aug 13 07:54:42.035578 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Aug 13 07:54:42.035797 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Aug 13 07:54:42.036009 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Aug 13 07:54:42.036184 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Aug 13 07:54:42.036380 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Aug 13 07:54:42.036590 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Aug 13 07:54:42.036795 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Aug 13 07:54:42.036973 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Aug 13 07:54:42.037149 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Aug 13 07:54:42.038410 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Aug 13 07:54:42.038625 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Aug 13 07:54:42.038801 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 07:54:42.038994 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Aug 13 07:54:42.039172 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Aug 13 07:54:42.039372 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Aug 13 07:54:42.039648 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Aug 13 07:54:42.039825 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Aug 13 07:54:42.040024 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Aug 13 07:54:42.040208 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Aug 13 07:54:42.042471 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Aug 13 07:54:42.042721 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Aug 13 07:54:42.042990 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 07:54:42.043225 kernel: pci_bus 0000:02: extended config space not accessible
Aug 13 07:54:42.044533 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Aug 13 07:54:42.044725 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Aug 13 07:54:42.044918 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Aug 13 07:54:42.045097 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Aug 13 07:54:42.046515 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Aug 13 07:54:42.046703 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Aug 13 07:54:42.046906 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Aug 13 07:54:42.047087 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Aug 13 07:54:42.048364 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 07:54:42.048604 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Aug 13 07:54:42.048814 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Aug 13 07:54:42.049011 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Aug 13 07:54:42.049208 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Aug 13 07:54:42.049481 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 07:54:42.049673 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Aug 13 07:54:42.049849 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Aug 13 07:54:42.050036 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 07:54:42.050251 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Aug 13 07:54:42.053804 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Aug 13 07:54:42.053998 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 07:54:42.054176 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Aug 13 07:54:42.056396 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Aug 13 07:54:42.056596 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 07:54:42.056783 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Aug 13 07:54:42.056970 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Aug 13 07:54:42.057150 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 07:54:42.057369 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Aug 13 07:54:42.057567 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Aug 13 07:54:42.057741 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 07:54:42.057760 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 07:54:42.057774 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 07:54:42.057786 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 07:54:42.057798 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 07:54:42.057818 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 07:54:42.057831 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 07:54:42.057843 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 07:54:42.057855 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 07:54:42.057867 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 07:54:42.057879 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 07:54:42.057891 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 07:54:42.057903 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 07:54:42.057915 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 07:54:42.057933 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 07:54:42.057945 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 07:54:42.057957 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 07:54:42.057970 kernel: iommu: Default domain type: Translated
Aug 13 07:54:42.057990 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 07:54:42.058002 kernel: PCI: Using ACPI for IRQ routing
Aug 13 07:54:42.058015 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 07:54:42.058027 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Aug 13 07:54:42.058039 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Aug 13 07:54:42.058240 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 07:54:42.058451 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 07:54:42.058628 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 07:54:42.058648 kernel: vgaarb: loaded
Aug 13 07:54:42.058660 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 07:54:42.058672 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 07:54:42.058685 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 07:54:42.058697 kernel: pnp: PnP ACPI init
Aug 13 07:54:42.058924 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 07:54:42.058945 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 07:54:42.058957 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 07:54:42.058969 kernel: NET: Registered PF_INET protocol family
Aug 13 07:54:42.058981 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 07:54:42.058993 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Aug 13 07:54:42.059012 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 07:54:42.059024 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Aug 13 07:54:42.059043 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Aug 13 07:54:42.059055 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Aug 13 07:54:42.059086 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:54:42.059098 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Aug 13 07:54:42.059109 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 07:54:42.059121 kernel: NET: Registered PF_XDP protocol family
Aug 13 07:54:42.060349 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Aug 13 07:54:42.060632 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Aug 13 07:54:42.060876 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Aug 13 07:54:42.061141 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Aug 13 07:54:42.062515 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Aug 13 07:54:42.062770 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Aug 13 07:54:42.062981 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Aug 13 07:54:42.063159 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Aug 13 07:54:42.064314 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Aug 13 07:54:42.064508 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Aug 13 07:54:42.064683 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Aug 13 07:54:42.064887 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Aug 13 07:54:42.065064 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Aug 13 07:54:42.066257 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Aug 13 07:54:42.066487 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Aug 13 07:54:42.066662 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Aug 13 07:54:42.066862 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Aug 13 07:54:42.067081 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Aug 13 07:54:42.067281 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Aug 13 07:54:42.067479 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Aug 13 07:54:42.067666 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Aug 13 07:54:42.067840 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 07:54:42.068012 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Aug 13 07:54:42.068183 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Aug 13 07:54:42.068396 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Aug 13 07:54:42.068675 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 07:54:42.068884 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Aug 13 07:54:42.069065 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Aug 13 07:54:42.069301 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Aug 13 07:54:42.069538 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 07:54:42.069725 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Aug 13 07:54:42.069897 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Aug 13 07:54:42.070077 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Aug 13 07:54:42.072380 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 07:54:42.072612 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Aug 13 07:54:42.072796 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Aug 13 07:54:42.072976 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Aug 13 07:54:42.073155 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 07:54:42.075394 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Aug 13 07:54:42.075609 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Aug 13 07:54:42.075800 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Aug 13 07:54:42.075980 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 07:54:42.076157 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Aug 13 07:54:42.076493 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Aug 13 07:54:42.076682 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Aug 13 07:54:42.076857 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 07:54:42.077049 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Aug 13 07:54:42.077242 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Aug 13 07:54:42.077512 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Aug 13 07:54:42.077695 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 07:54:42.077894 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 07:54:42.078085 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 07:54:42.080302 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 07:54:42.080537 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 07:54:42.080703 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 07:54:42.080865 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Aug 13 07:54:42.081136 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Aug 13 07:54:42.082918 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Aug 13 07:54:42.083098 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Aug 13 07:54:42.083322 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Aug 13 07:54:42.083554 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Aug 13 07:54:42.083727 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Aug 13 07:54:42.083896 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Aug 13 07:54:42.084078 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Aug 13 07:54:42.084318 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Aug 13 07:54:42.084530 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Aug 13 07:54:42.084792 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Aug 13 07:54:42.084977 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Aug 13 07:54:42.085143 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Aug 13 07:54:42.085377 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Aug 13 07:54:42.085559 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Aug 13 07:54:42.085758 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Aug 13 07:54:42.085942 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Aug 13 07:54:42.086114 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Aug 13 07:54:42.086331 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Aug 13 07:54:42.086570 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Aug 13 07:54:42.086737 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Aug 13 07:54:42.086946 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Aug 13 07:54:42.087128 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Aug 13 07:54:42.087465 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Aug 13 07:54:42.087638 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Aug 13 07:54:42.087659 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 07:54:42.087673 kernel: PCI: CLS 0 bytes, default 64
Aug 13 07:54:42.087686 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 07:54:42.087699 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Aug 13 07:54:42.087713 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Aug 13 07:54:42.087732 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Aug 13 07:54:42.087745 kernel: Initialise system trusted keyrings
Aug 13 07:54:42.087758 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Aug 13 07:54:42.087778 kernel: Key type asymmetric registered
Aug 13 07:54:42.087791 kernel: Asymmetric key parser 'x509' registered
Aug 13 07:54:42.087803 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Aug 13 07:54:42.087816 kernel: io scheduler mq-deadline registered
Aug 13 07:54:42.087828 kernel: io scheduler kyber registered
Aug 13 07:54:42.087841 kernel: io scheduler bfq registered
Aug 13 07:54:42.088022 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24
Aug 13 07:54:42.088199 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24
Aug 13 07:54:42.088401 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.088606 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25
Aug 13 07:54:42.088818 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25
Aug 13 07:54:42.088992 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.089166 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26
Aug 13 07:54:42.089509 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26
Aug 13 07:54:42.089840 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.090059 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27
Aug 13 07:54:42.090294 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27
Aug 13 07:54:42.090485 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.090665 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28
Aug 13 07:54:42.090843 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28
Aug 13 07:54:42.091051 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.091236 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29
Aug 13 07:54:42.091471 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29
Aug 13 07:54:42.091649 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.091856 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30
Aug 13 07:54:42.092025 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30
Aug 13 07:54:42.092208 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.092421 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31
Aug 13 07:54:42.092607 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31
Aug 13 07:54:42.092781 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+
Aug 13 07:54:42.092802 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 07:54:42.092816 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 07:54:42.092830 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Aug 13 07:54:42.092851 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 07:54:42.092864 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 07:54:42.092877 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 07:54:42.092890 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 07:54:42.092903 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 07:54:42.093130 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 07:54:42.093171 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 07:54:42.093377 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 07:54:42.093563 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T07:54:41 UTC (1755071681)
Aug 13 07:54:42.093737 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Aug 13 07:54:42.093757 kernel: intel_pstate: CPU model not supported
Aug 13 07:54:42.093782 kernel: NET: Registered PF_INET6 protocol family
Aug 13 07:54:42.093794 kernel: Segment Routing with IPv6
Aug 13 07:54:42.093806 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 07:54:42.093818 kernel: NET: Registered PF_PACKET protocol family
Aug 13 07:54:42.093830 kernel: Key type dns_resolver registered
Aug 13 07:54:42.093855 kernel: IPI shorthand broadcast: enabled
Aug 13 07:54:42.093875 kernel: sched_clock: Marking stable (1415004020, 231748007)->(1885746310, -238994283)
Aug 13 07:54:42.093887 kernel: registered taskstats version 1
Aug 13 07:54:42.093912 kernel: Loading compiled-in X.509 certificates
Aug 13 07:54:42.093925 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.100-flatcar: 264e720147fa8df9744bb9dc1c08171c0cb20041'
Aug 13 07:54:42.093938 kernel: Key type .fscrypt registered
Aug 13 07:54:42.093950 kernel: Key type fscrypt-provisioning registered
Aug 13 07:54:42.093962 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 07:54:42.093975 kernel: ima: Allocated hash algorithm: sha1
Aug 13 07:54:42.093993 kernel: ima: No architecture policies found
Aug 13 07:54:42.094006 kernel: clk: Disabling unused clocks
Aug 13 07:54:42.094019 kernel: Freeing unused kernel image (initmem) memory: 42876K
Aug 13 07:54:42.094031 kernel: Write protecting the kernel read-only data: 36864k
Aug 13 07:54:42.094044 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K
Aug 13 07:54:42.094056 kernel: Run /init as init process
Aug 13 07:54:42.094069 kernel: with arguments:
Aug 13 07:54:42.094082 kernel: /init
Aug 13 07:54:42.094094 kernel: with environment:
Aug 13 07:54:42.094107 kernel: HOME=/
Aug 13 07:54:42.094124 kernel: TERM=linux
Aug 13 07:54:42.094137 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 07:54:42.094153 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Aug 13 07:54:42.094169 systemd[1]: Detected virtualization kvm.
Aug 13 07:54:42.094183 systemd[1]: Detected architecture x86-64.
Aug 13 07:54:42.094196 systemd[1]: Running in initrd.
Aug 13 07:54:42.094209 systemd[1]: No hostname configured, using default hostname.
Aug 13 07:54:42.094227 systemd[1]: Hostname set to <localhost>.
Aug 13 07:54:42.094241 systemd[1]: Initializing machine ID from VM UUID.
Aug 13 07:54:42.094255 systemd[1]: Queued start job for default target initrd.target.
Aug 13 07:54:42.094268 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 07:54:42.094393 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 07:54:42.094410 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 07:54:42.094424 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 07:54:42.094449 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 07:54:42.094486 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 07:54:42.094503 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 07:54:42.094518 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 07:54:42.094531 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 07:54:42.094545 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 07:54:42.094559 systemd[1]: Reached target paths.target - Path Units.
Aug 13 07:54:42.094572 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 07:54:42.094592 systemd[1]: Reached target swap.target - Swaps.
Aug 13 07:54:42.094606 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 07:54:42.094620 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 07:54:42.094633 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 07:54:42.094647 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 07:54:42.094661 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Aug 13 07:54:42.094684 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 07:54:42.094698 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 07:54:42.094712 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 07:54:42.094732 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 07:54:42.094746 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 07:54:42.094760 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 07:54:42.094773 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 07:54:42.094787 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 07:54:42.094800 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 07:54:42.094814 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 07:54:42.094880 systemd-journald[202]: Collecting audit messages is disabled.
Aug 13 07:54:42.094918 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:54:42.094932 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 07:54:42.094946 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 07:54:42.094960 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 07:54:42.094980 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 07:54:42.095025 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 07:54:42.095040 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 07:54:42.095055 systemd-journald[202]: Journal started
Aug 13 07:54:42.095111 systemd-journald[202]: Runtime Journal (/run/log/journal/73727eee1b434dc080b3847fc5869ebf) is 4.7M, max 38.0M, 33.2M free.
Aug 13 07:54:42.045311 systemd-modules-load[203]: Inserted module 'overlay'
Aug 13 07:54:42.157726 kernel: Bridge firewalling registered
Aug 13 07:54:42.157769 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 07:54:42.096679 systemd-modules-load[203]: Inserted module 'br_netfilter'
Aug 13 07:54:42.158687 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 07:54:42.160014 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:54:42.170561 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:54:42.173283 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 07:54:42.176514 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 07:54:42.185456 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 07:54:42.201745 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 07:54:42.202817 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 07:54:42.206450 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:54:42.216586 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 07:54:42.218675 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 07:54:42.223431 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 07:54:42.232804 dracut-cmdline[234]: dracut-dracut-053
Aug 13 07:54:42.236317 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=8b1c4c6202e70eaa8c6477427259ab5e403c8f1de8515605304942a21d23450a
Aug 13 07:54:42.278418 systemd-resolved[238]: Positive Trust Anchors:
Aug 13 07:54:42.278478 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 07:54:42.278520 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 07:54:42.287109 systemd-resolved[238]: Defaulting to hostname 'linux'.
Aug 13 07:54:42.288879 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 07:54:42.289996 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 07:54:42.345281 kernel: SCSI subsystem initialized
Aug 13 07:54:42.357283 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 07:54:42.369294 kernel: iscsi: registered transport (tcp)
Aug 13 07:54:42.394606 kernel: iscsi: registered transport (qla4xxx)
Aug 13 07:54:42.394680 kernel: QLogic iSCSI HBA Driver
Aug 13 07:54:42.452504 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 07:54:42.465500 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 07:54:42.497842 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 07:54:42.497928 kernel: device-mapper: uevent: version 1.0.3
Aug 13 07:54:42.497949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Aug 13 07:54:42.548286 kernel: raid6: sse2x4 gen() 13687 MB/s
Aug 13 07:54:42.565282 kernel: raid6: sse2x2 gen() 9278 MB/s
Aug 13 07:54:42.583926 kernel: raid6: sse2x1 gen() 10164 MB/s
Aug 13 07:54:42.583999 kernel: raid6: using algorithm sse2x4 gen() 13687 MB/s
Aug 13 07:54:42.602954 kernel: raid6: .... xor() 7729 MB/s, rmw enabled
Aug 13 07:54:42.603020 kernel: raid6: using ssse3x2 recovery algorithm
Aug 13 07:54:42.628337 kernel: xor: automatically using best checksumming function avx
Aug 13 07:54:42.814404 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 07:54:42.829461 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 07:54:42.839533 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 07:54:42.856675 systemd-udevd[420]: Using default interface naming scheme 'v255'.
Aug 13 07:54:42.863527 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 07:54:42.872824 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 07:54:42.894463 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Aug 13 07:54:42.936262 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 07:54:42.941451 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 07:54:43.072050 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 07:54:43.080584 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 07:54:43.115815 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 07:54:43.126099 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 07:54:43.128307 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 07:54:43.129271 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 07:54:43.138712 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 07:54:43.161474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 07:54:43.213499 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Aug 13 07:54:43.226284 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Aug 13 07:54:43.234276 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 07:54:43.245255 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 07:54:43.245315 kernel: GPT:17805311 != 125829119
Aug 13 07:54:43.245353 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 07:54:43.246856 kernel: GPT:17805311 != 125829119
Aug 13 07:54:43.248314 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 07:54:43.249579 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:54:43.268569 kernel: AVX version of gcm_enc/dec engaged.
Aug 13 07:54:43.272273 kernel: AES CTR mode by8 optimization enabled
Aug 13 07:54:43.283275 kernel: ACPI: bus type USB registered
Aug 13 07:54:43.287102 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 07:54:43.288714 kernel: usbcore: registered new interface driver usbfs
Aug 13 07:54:43.288752 kernel: usbcore: registered new interface driver hub
Aug 13 07:54:43.288976 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:54:43.292760 kernel: usbcore: registered new device driver usb
Aug 13 07:54:43.293582 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:54:43.295278 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 07:54:43.295492 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:54:43.298403 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:54:43.314987 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 07:54:43.343269 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Aug 13 07:54:43.343683 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Aug 13 07:54:43.351303 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Aug 13 07:54:43.375259 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Aug 13 07:54:43.375618 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (475)
Aug 13 07:54:43.383529 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Aug 13 07:54:43.384004 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Aug 13 07:54:43.384264 kernel: hub 1-0:1.0: USB hub found
Aug 13 07:54:43.384543 kernel: hub 1-0:1.0: 4 ports detected
Aug 13 07:54:43.384752 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Aug 13 07:54:43.388297 kernel: hub 2-0:1.0: USB hub found
Aug 13 07:54:43.390216 kernel: hub 2-0:1.0: 4 ports detected
Aug 13 07:54:43.397317 kernel: BTRFS: device fsid 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (469)
Aug 13 07:54:43.405619 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Aug 13 07:54:43.502530 kernel: libata version 3.00 loaded.
Aug 13 07:54:43.502579 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 07:54:43.502997 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 07:54:43.503020 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Aug 13 07:54:43.503227 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 07:54:43.503477 kernel: scsi host0: ahci
Aug 13 07:54:43.503735 kernel: scsi host1: ahci
Aug 13 07:54:43.503973 kernel: scsi host2: ahci
Aug 13 07:54:43.504216 kernel: scsi host3: ahci
Aug 13 07:54:43.504833 kernel: scsi host4: ahci
Aug 13 07:54:43.505072 kernel: scsi host5: ahci
Aug 13 07:54:43.505301 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41
Aug 13 07:54:43.505333 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41
Aug 13 07:54:43.505351 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41
Aug 13 07:54:43.505368 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41
Aug 13 07:54:43.505384 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41
Aug 13 07:54:43.505401 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41
Aug 13 07:54:43.508594 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 07:54:43.518350 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Aug 13 07:54:43.525187 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 13 07:54:43.531309 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Aug 13 07:54:43.532144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Aug 13 07:54:43.545578 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 07:54:43.549717 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 07:54:43.555706 disk-uuid[563]: Primary Header is updated.
Aug 13 07:54:43.555706 disk-uuid[563]: Secondary Entries is updated.
Aug 13 07:54:43.555706 disk-uuid[563]: Secondary Header is updated.
Aug 13 07:54:43.565258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:54:43.576265 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:54:43.577087 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 07:54:43.633254 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Aug 13 07:54:43.760603 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.760876 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.760900 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.763997 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.764642 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.766330 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 07:54:43.789270 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 13 07:54:43.796090 kernel: usbcore: registered new interface driver usbhid
Aug 13 07:54:43.796150 kernel: usbhid: USB HID core driver
Aug 13 07:54:43.803627 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Aug 13 07:54:43.803745 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Aug 13 07:54:44.578307 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Aug 13 07:54:44.578713 disk-uuid[564]: The operation has completed successfully.
Aug 13 07:54:44.627571 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 07:54:44.627772 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 07:54:44.652525 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 07:54:44.658813 sh[583]: Success
Aug 13 07:54:44.676269 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Aug 13 07:54:44.726558 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 07:54:44.738362 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 07:54:44.740421 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 07:54:44.763292 kernel: BTRFS info (device dm-0): first mount of filesystem 6f4baebc-7e60-4ee7-93a9-8bedb08a33ad
Aug 13 07:54:44.763410 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:54:44.763445 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Aug 13 07:54:44.767389 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Aug 13 07:54:44.767436 kernel: BTRFS info (device dm-0): using free space tree
Aug 13 07:54:44.777498 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 07:54:44.779080 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 07:54:44.788442 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 07:54:44.792028 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 07:54:44.806321 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:54:44.809852 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 07:54:44.809883 kernel: BTRFS info (device vda6): using free space tree
Aug 13 07:54:44.814261 kernel: BTRFS info (device vda6): auto enabling async discard
Aug 13 07:54:44.827465 systemd[1]: mnt-oem.mount: Deactivated successfully.
Aug 13 07:54:44.831269 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079
Aug 13 07:54:44.838934 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 07:54:44.846513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 07:54:44.954833 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 07:54:44.972137 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 07:54:44.999286 systemd-networkd[768]: lo: Link UP
Aug 13 07:54:45.000613 systemd-networkd[768]: lo: Gained carrier
Aug 13 07:54:45.003786 systemd-networkd[768]: Enumeration completed
Aug 13 07:54:45.004368 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:54:45.004385 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 07:54:45.005438 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 07:54:45.006745 systemd[1]: Reached target network.target - Network.
Aug 13 07:54:45.007836 systemd-networkd[768]: eth0: Link UP
Aug 13 07:54:45.007843 systemd-networkd[768]: eth0: Gained carrier
Aug 13 07:54:45.007858 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 07:54:45.041472 systemd-networkd[768]: eth0: DHCPv4 address 10.230.74.218/30, gateway 10.230.74.217 acquired from 10.230.74.217
Aug 13 07:54:45.099796 ignition[679]: Ignition 2.19.0
Aug 13 07:54:45.099816 ignition[679]: Stage: fetch-offline
Aug 13 07:54:45.099883 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Aug 13 07:54:45.102257 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 07:54:45.099910 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:45.100070 ignition[679]: parsed url from cmdline: "" Aug 13 07:54:45.100077 ignition[679]: no config URL provided Aug 13 07:54:45.100086 ignition[679]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:54:45.100103 ignition[679]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:54:45.100111 ignition[679]: failed to fetch config: resource requires networking Aug 13 07:54:45.100914 ignition[679]: Ignition finished successfully Aug 13 07:54:45.110535 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 07:54:45.174358 ignition[777]: Ignition 2.19.0 Aug 13 07:54:45.174399 ignition[777]: Stage: fetch Aug 13 07:54:45.174625 ignition[777]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:45.174644 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:45.174790 ignition[777]: parsed url from cmdline: "" Aug 13 07:54:45.174797 ignition[777]: no config URL provided Aug 13 07:54:45.174806 ignition[777]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 07:54:45.174822 ignition[777]: no config at "/usr/lib/ignition/user.ign" Aug 13 07:54:45.174997 ignition[777]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Aug 13 07:54:45.175073 ignition[777]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Aug 13 07:54:45.175198 ignition[777]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Aug 13 07:54:45.198678 ignition[777]: GET result: OK Aug 13 07:54:45.199171 ignition[777]: parsing config with SHA512: 770f3391d50356c640a858d6082abf570a2cd2235b50d980528d1858293439e3030f18a217addd40da2422a84c0f82ea7457879b8b714df7b648d7b244418b22 Aug 13 07:54:45.207840 unknown[777]: fetched base config from "system" Aug 13 07:54:45.207875 unknown[777]: fetched base config from "system" Aug 13 07:54:45.208667 ignition[777]: fetch: fetch complete Aug 13 07:54:45.207885 unknown[777]: fetched user config from "openstack" Aug 13 07:54:45.208675 ignition[777]: fetch: fetch passed Aug 13 07:54:45.210532 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 07:54:45.208743 ignition[777]: Ignition finished successfully Aug 13 07:54:45.220532 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 07:54:45.241186 ignition[783]: Ignition 2.19.0 Aug 13 07:54:45.241225 ignition[783]: Stage: kargs Aug 13 07:54:45.241521 ignition[783]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:45.241541 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:45.242849 ignition[783]: kargs: kargs passed Aug 13 07:54:45.245639 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 07:54:45.242917 ignition[783]: Ignition finished successfully Aug 13 07:54:45.252508 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 07:54:45.274435 ignition[789]: Ignition 2.19.0 Aug 13 07:54:45.274453 ignition[789]: Stage: disks Aug 13 07:54:45.274701 ignition[789]: no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:45.274721 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:45.275899 ignition[789]: disks: disks passed Aug 13 07:54:45.279423 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Aug 13 07:54:45.276020 ignition[789]: Ignition finished successfully Aug 13 07:54:45.280971 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 07:54:45.282481 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:54:45.283827 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:54:45.285414 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:54:45.286936 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:54:45.293456 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 07:54:45.314423 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Aug 13 07:54:45.320744 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 07:54:45.326338 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 07:54:45.449259 kernel: EXT4-fs (vda9): mounted filesystem 98cc0201-e9ec-4d2c-8a62-5b521bf9317d r/w with ordered data mode. Quota mode: none. Aug 13 07:54:45.450479 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 07:54:45.451813 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 07:54:45.463397 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:54:45.466391 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 07:54:45.467512 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 07:54:45.469644 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Aug 13 07:54:45.471568 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 07:54:45.471606 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:54:45.484504 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805) Aug 13 07:54:45.489576 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 07:54:45.494793 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:54:45.495128 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:54:45.495186 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:54:45.504681 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 07:54:45.513839 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:54:45.517869 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 07:54:45.586754 initrd-setup-root[832]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 07:54:45.593863 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory Aug 13 07:54:45.603545 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 07:54:45.608734 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 07:54:45.715316 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 07:54:45.726466 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 07:54:45.730751 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Aug 13 07:54:45.741333 kernel: BTRFS info (device vda6): last unmount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:54:45.759094 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 07:54:45.778314 ignition[921]: INFO : Ignition 2.19.0 Aug 13 07:54:45.778314 ignition[921]: INFO : Stage: mount Aug 13 07:54:45.780100 ignition[921]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:45.780100 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:45.781904 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 07:54:45.784711 ignition[921]: INFO : mount: mount passed Aug 13 07:54:45.784711 ignition[921]: INFO : Ignition finished successfully Aug 13 07:54:45.784792 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 07:54:46.190609 systemd-networkd[768]: eth0: Gained IPv6LL Aug 13 07:54:47.698252 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:179:92b6:24:19ff:fee6:4ada/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:92b6:24:19ff:fee6:4ada/64 assigned by NDisc. Aug 13 07:54:47.698268 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 07:54:52.679384 coreos-metadata[807]: Aug 13 07:54:52.679 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:54:52.701089 coreos-metadata[807]: Aug 13 07:54:52.701 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 07:54:52.716566 coreos-metadata[807]: Aug 13 07:54:52.716 INFO Fetch successful Aug 13 07:54:52.717479 coreos-metadata[807]: Aug 13 07:54:52.716 INFO wrote hostname srv-er0cq.gb1.brightbox.com to /sysroot/etc/hostname Aug 13 07:54:52.719184 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Aug 13 07:54:52.719421 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Aug 13 07:54:52.737517 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 07:54:52.760625 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 07:54:52.772260 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (939) Aug 13 07:54:52.775648 kernel: BTRFS info (device vda6): first mount of filesystem 7cc37ed4-8461-447f-bee4-dfe5b4695079 Aug 13 07:54:52.775684 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 07:54:52.777441 kernel: BTRFS info (device vda6): using free space tree Aug 13 07:54:52.783277 kernel: BTRFS info (device vda6): auto enabling async discard Aug 13 07:54:52.786062 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 13 07:54:52.835312 ignition[957]: INFO : Ignition 2.19.0 Aug 13 07:54:52.835312 ignition[957]: INFO : Stage: files Aug 13 07:54:52.837257 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:52.837257 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:52.837257 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Aug 13 07:54:52.840075 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 07:54:52.840075 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 07:54:52.842049 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 07:54:52.842979 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 07:54:52.842979 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 07:54:52.842830 unknown[957]: wrote ssh authorized keys file for user: core Aug 13 07:54:52.846012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:54:52.846012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Aug 13 07:54:52.846012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:54:52.846012 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 07:54:53.095589 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 13 07:54:53.476273 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 07:54:53.477839 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 13 07:54:53.477839 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 07:54:53.477839 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:54:53.477839 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 07:54:53.477839 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 07:54:53.489115 ignition[957]: 
INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:54:53.489115 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 07:54:53.855112 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 13 07:54:55.192728 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(c): [started] processing unit "containerd.service" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(c): [finished] processing unit "containerd.service" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Aug 13 07:54:55.194629 ignition[957]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:54:55.203959 ignition[957]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 07:54:55.203959 ignition[957]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Aug 13 07:54:55.203959 ignition[957]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Aug 13 07:54:55.203959 ignition[957]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 07:54:55.203959 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:54:55.203959 ignition[957]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 07:54:55.203959 ignition[957]: INFO : files: files passed Aug 13 07:54:55.203959 ignition[957]: INFO : Ignition finished successfully Aug 13 07:54:55.197743 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 07:54:55.209415 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 07:54:55.215463 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 07:54:55.216810 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 07:54:55.218802 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Aug 13 07:54:55.244264 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:54:55.244264 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:54:55.248073 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 07:54:55.250556 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:54:55.251971 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 07:54:55.259447 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 07:54:55.291209 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 07:54:55.291421 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 07:54:55.293249 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 07:54:55.294681 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 07:54:55.296270 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 07:54:55.303418 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 07:54:55.324485 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:54:55.331422 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 07:54:55.346253 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:54:55.347206 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:54:55.348985 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 07:54:55.350535 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 07:54:55.350722 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 07:54:55.352521 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 07:54:55.353410 systemd[1]: Stopped target basic.target - Basic System. Aug 13 07:54:55.354838 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 07:54:55.356180 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 07:54:55.357573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 07:54:55.359128 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 07:54:55.360607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 07:54:55.362199 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 07:54:55.363719 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 07:54:55.365269 systemd[1]: Stopped target swap.target - Swaps. Aug 13 07:54:55.366637 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 07:54:55.366828 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 07:54:55.368464 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:54:55.369389 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:54:55.370836 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 07:54:55.373277 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Aug 13 07:54:55.374306 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 07:54:55.374492 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 07:54:55.376397 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 07:54:55.376641 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 07:54:55.378308 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 07:54:55.378470 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 07:54:55.385525 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 07:54:55.386349 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 07:54:55.386580 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:54:55.391507 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 07:54:55.398816 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 07:54:55.399093 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:54:55.400168 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 07:54:55.400428 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 07:54:55.407722 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 07:54:55.407875 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 07:54:55.424700 ignition[1010]: INFO : Ignition 2.19.0 Aug 13 07:54:55.424700 ignition[1010]: INFO : Stage: umount Aug 13 07:54:55.427415 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 07:54:55.427415 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Aug 13 07:54:55.427415 ignition[1010]: INFO : umount: umount passed Aug 13 07:54:55.427415 ignition[1010]: INFO : Ignition finished successfully Aug 13 07:54:55.427059 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 07:54:55.428523 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 07:54:55.428665 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 07:54:55.430574 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 07:54:55.430702 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 07:54:55.433179 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 07:54:55.433266 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 07:54:55.434047 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 07:54:55.434109 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 07:54:55.435525 systemd[1]: Stopped target network.target - Network. Aug 13 07:54:55.436774 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 07:54:55.436874 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 07:54:55.438260 systemd[1]: Stopped target paths.target - Path Units. Aug 13 07:54:55.439561 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 07:54:55.443327 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:54:55.444211 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 07:54:55.445748 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 07:54:55.447366 systemd[1]: iscsid.socket: Deactivated successfully. 
Aug 13 07:54:55.447459 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 07:54:55.448798 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 07:54:55.448861 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 07:54:55.450297 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 07:54:55.450390 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 07:54:55.451887 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 07:54:55.451956 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 07:54:55.453526 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 07:54:55.457329 systemd-networkd[768]: eth0: DHCPv6 lease lost Aug 13 07:54:55.457926 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 07:54:55.459603 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 07:54:55.459792 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 07:54:55.462619 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 07:54:55.462734 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:54:55.472409 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 07:54:55.473856 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 07:54:55.473951 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 07:54:55.477613 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:54:55.484604 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 07:54:55.484799 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 07:54:55.490119 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 07:54:55.490306 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:54:55.491334 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 07:54:55.491404 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 07:54:55.492926 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 07:54:55.493006 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:54:55.496405 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 07:54:55.496684 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:54:55.502583 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 07:54:55.502720 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 07:54:55.503837 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 07:54:55.503890 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:54:55.505500 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 07:54:55.505592 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 07:54:55.507959 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 07:54:55.508045 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 07:54:55.511683 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 07:54:55.511757 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 13 07:54:55.523482 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 07:54:55.526602 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 07:54:55.526685 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:54:55.528343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 07:54:55.528409 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:54:55.530462 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 07:54:55.530604 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 07:54:55.535602 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 07:54:55.535757 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 07:54:55.555442 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 07:54:55.555658 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 07:54:55.557442 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 07:54:55.558571 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 07:54:55.558659 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 07:54:55.563401 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 07:54:55.585467 systemd[1]: Switching root. Aug 13 07:54:55.617155 systemd-journald[202]: Journal stopped Aug 13 07:54:57.233151 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Aug 13 07:54:57.233281 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 07:54:57.233322 kernel: SELinux: policy capability open_perms=1 Aug 13 07:54:57.233342 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 07:54:57.233360 kernel: SELinux: policy capability always_check_network=0 Aug 13 07:54:57.233377 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 07:54:57.233394 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 07:54:57.233418 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 07:54:57.233459 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 07:54:57.233479 kernel: audit: type=1403 audit(1755071695.913:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 07:54:57.233512 systemd[1]: Successfully loaded SELinux policy in 54.772ms. Aug 13 07:54:57.233545 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.387ms. Aug 13 07:54:57.233572 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 13 07:54:57.233592 systemd[1]: Detected virtualization kvm. Aug 13 07:54:57.233611 systemd[1]: Detected architecture x86-64. Aug 13 07:54:57.233637 systemd[1]: Detected first boot. Aug 13 07:54:57.233657 systemd[1]: Hostname set to <srv-er0cq.gb1.brightbox.com>. Aug 13 07:54:57.233699 systemd[1]: Initializing machine ID from VM UUID. Aug 13 07:54:57.233720 zram_generator::config[1070]: No configuration found. Aug 13 07:54:57.233751 systemd[1]: Populated /etc with preset unit settings. Aug 13 07:54:57.233770 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 07:54:57.233794 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Aug 13 07:54:57.233828 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 07:54:57.233849 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 07:54:57.233867 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 07:54:57.233901 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 07:54:57.233933 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 07:54:57.233952 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 07:54:57.233970 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 07:54:57.233987 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 07:54:57.234014 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 07:54:57.234032 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 07:54:57.234051 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 07:54:57.234077 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 07:54:57.234131 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 07:54:57.234153 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 07:54:57.234173 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 07:54:57.234192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 07:54:57.234253 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 07:54:57.234281 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 07:54:57.234309 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 07:54:57.234345 systemd[1]: Reached target slices.target - Slice Units. Aug 13 07:54:57.234366 systemd[1]: Reached target swap.target - Swaps. Aug 13 07:54:57.234400 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 07:54:57.234420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 07:54:57.234445 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 07:54:57.234464 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 13 07:54:57.234506 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 07:54:57.234554 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 07:54:57.234577 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 07:54:57.234596 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 07:54:57.234628 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 07:54:57.234655 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 07:54:57.234686 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 07:54:57.234704 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Aug 13 07:54:57.234739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 07:54:57.234759 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 07:54:57.234824 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 07:54:57.234865 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 07:54:57.234883 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:54:57.234901 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 07:54:57.234931 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 07:54:57.234951 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:54:57.234969 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:54:57.235028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:54:57.235059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 07:54:57.235079 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:54:57.235099 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 07:54:57.235140 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Aug 13 07:54:57.235170 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Aug 13 07:54:57.235190 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 07:54:57.235209 kernel: fuse: init (API version 7.39) Aug 13 07:54:57.235259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 07:54:57.235283 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 07:54:57.235302 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 07:54:57.235321 kernel: loop: module loaded Aug 13 07:54:57.235339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 07:54:57.235358 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:54:57.235377 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 07:54:57.235432 systemd-journald[1188]: Collecting audit messages is disabled. Aug 13 07:54:57.235499 kernel: ACPI: bus type drm_connector registered Aug 13 07:54:57.235534 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 07:54:57.235579 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 07:54:57.235601 systemd-journald[1188]: Journal started Aug 13 07:54:57.235644 systemd-journald[1188]: Runtime Journal (/run/log/journal/73727eee1b434dc080b3847fc5869ebf) is 4.7M, max 38.0M, 33.2M free. Aug 13 07:54:57.239308 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 07:54:57.244693 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 07:54:57.245548 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 07:54:57.246420 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Aug 13 07:54:57.247503 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 07:54:57.248686 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 07:54:57.250155 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 07:54:57.250408 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 07:54:57.251592 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:54:57.251831 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:54:57.253166 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:54:57.253435 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:54:57.254836 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:54:57.255064 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:54:57.256401 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 07:54:57.256643 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 07:54:57.257770 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:54:57.258094 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:54:57.260848 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 07:54:57.263127 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 07:54:57.266520 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 07:54:57.282689 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 07:54:57.289343 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 07:54:57.300309 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 07:54:57.304332 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 07:54:57.312383 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 07:54:57.325434 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 07:54:57.326366 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:54:57.331287 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 07:54:57.333394 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:54:57.347411 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 07:54:57.356384 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 07:54:57.360925 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 07:54:57.364814 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 07:54:57.385355 systemd-journald[1188]: Time spent on flushing to /var/log/journal/73727eee1b434dc080b3847fc5869ebf is 57.771ms for 1125 entries. Aug 13 07:54:57.385355 systemd-journald[1188]: System Journal (/var/log/journal/73727eee1b434dc080b3847fc5869ebf) is 8.0M, max 584.8M, 576.8M free. 
Aug 13 07:54:57.477481 systemd-journald[1188]: Received client request to flush runtime journal. Aug 13 07:54:57.389771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 07:54:57.393918 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 07:54:57.427803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 07:54:57.472781 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Aug 13 07:54:57.472800 systemd-tmpfiles[1227]: ACLs are not supported, ignoring. Aug 13 07:54:57.484735 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 07:54:57.496169 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 07:54:57.508424 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 07:54:57.509851 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 07:54:57.522625 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 13 07:54:57.542560 udevadm[1245]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Aug 13 07:54:57.572641 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 07:54:57.581553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 07:54:57.606623 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Aug 13 07:54:57.606651 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Aug 13 07:54:57.616144 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 07:54:58.181897 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 07:54:58.191468 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 07:54:58.225545 systemd-udevd[1255]: Using default interface naming scheme 'v255'. Aug 13 07:54:58.252049 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 07:54:58.263442 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 07:54:58.296286 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 07:54:58.320443 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Aug 13 07:54:58.408328 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 07:54:58.426272 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1258) Aug 13 07:54:58.615290 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 07:54:58.629269 kernel: ACPI: button: Power Button [PWRF] Aug 13 07:54:58.636885 systemd-networkd[1260]: lo: Link UP Aug 13 07:54:58.636899 systemd-networkd[1260]: lo: Gained carrier Aug 13 07:54:58.640247 systemd-networkd[1260]: Enumeration completed Aug 13 07:54:58.642420 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:54:58.642433 systemd-networkd[1260]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 07:54:58.642477 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Aug 13 07:54:58.644261 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 13 07:54:58.645021 systemd-networkd[1260]: eth0: Link UP Aug 13 07:54:58.645027 systemd-networkd[1260]: eth0: Gained carrier Aug 13 07:54:58.645044 systemd-networkd[1260]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 07:54:58.655435 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 07:54:58.662312 systemd-networkd[1260]: eth0: DHCPv4 address 10.230.74.218/30, gateway 10.230.74.217 acquired from 10.230.74.217 Aug 13 07:54:58.695264 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 07:54:58.721333 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Aug 13 07:54:58.721462 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 07:54:58.729866 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Aug 13 07:54:58.730212 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 07:54:58.797547 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 07:54:59.006566 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 13 07:54:59.037092 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 07:54:59.046481 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 13 07:54:59.067285 lvm[1295]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:54:59.105823 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 13 07:54:59.108073 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 07:54:59.115557 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 13 07:54:59.123971 lvm[1298]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 13 07:54:59.158620 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 13 07:54:59.160767 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 07:54:59.161865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 07:54:59.162039 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 07:54:59.162856 systemd[1]: Reached target machines.target - Containers. Aug 13 07:54:59.165694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 13 07:54:59.172427 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 07:54:59.175531 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 07:54:59.176671 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:54:59.184448 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 07:54:59.188398 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 13 07:54:59.194157 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 07:54:59.202402 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Aug 13 07:54:59.211297 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 07:54:59.238867 kernel: loop0: detected capacity change from 0 to 140768 Aug 13 07:54:59.244035 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 07:54:59.245052 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 13 07:54:59.282266 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 07:54:59.313288 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 07:54:59.371294 kernel: loop2: detected capacity change from 0 to 142488 Aug 13 07:54:59.422074 kernel: loop3: detected capacity change from 0 to 8 Aug 13 07:54:59.491268 kernel: loop4: detected capacity change from 0 to 140768 Aug 13 07:54:59.525268 kernel: loop5: detected capacity change from 0 to 221472 Aug 13 07:54:59.543361 kernel: loop6: detected capacity change from 0 to 142488 Aug 13 07:54:59.575271 kernel: loop7: detected capacity change from 0 to 8 Aug 13 07:54:59.577281 (sd-merge)[1319]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Aug 13 07:54:59.578189 (sd-merge)[1319]: Merged extensions into '/usr'. Aug 13 07:54:59.603873 systemd[1]: Reloading requested from client PID 1306 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 07:54:59.603901 systemd[1]: Reloading... Aug 13 07:54:59.739449 zram_generator::config[1347]: No configuration found. Aug 13 07:54:59.946433 ldconfig[1302]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 07:54:59.994510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:55:00.084005 systemd[1]: Reloading finished in 478 ms. Aug 13 07:55:00.111483 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 07:55:00.117472 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 07:55:00.132481 systemd[1]: Starting ensure-sysext.service... Aug 13 07:55:00.137407 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 07:55:00.145389 systemd[1]: Reloading requested from client PID 1410 ('systemctl') (unit ensure-sysext.service)... Aug 13 07:55:00.145586 systemd[1]: Reloading... Aug 13 07:55:00.192505 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 07:55:00.193164 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 07:55:00.195448 systemd-tmpfiles[1411]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 07:55:00.196070 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Aug 13 07:55:00.196194 systemd-tmpfiles[1411]: ACLs are not supported, ignoring. Aug 13 07:55:00.204373 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:55:00.204403 systemd-tmpfiles[1411]: Skipping /boot Aug 13 07:55:00.267577 systemd-tmpfiles[1411]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 07:55:00.267770 systemd-tmpfiles[1411]: Skipping /boot Aug 13 07:55:00.326258 zram_generator::config[1441]: No configuration found. 
Aug 13 07:55:00.398661 systemd-networkd[1260]: eth0: Gained IPv6LL Aug 13 07:55:00.503319 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:55:00.591795 systemd[1]: Reloading finished in 445 ms. Aug 13 07:55:00.614309 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 07:55:00.626477 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 07:55:00.646569 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:55:00.652460 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 07:55:00.656448 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 07:55:00.668987 systemd-networkd[1260]: eth0: Ignoring DHCPv6 address 2a02:1348:179:92b6:24:19ff:fee6:4ada/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:92b6:24:19ff:fee6:4ada/64 assigned by NDisc. Aug 13 07:55:00.669001 systemd-networkd[1260]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Aug 13 07:55:00.674039 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 07:55:00.680717 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 07:55:00.690974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:55:00.691341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:55:00.702593 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:55:00.715611 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:55:00.722527 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:55:00.727955 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:55:00.728141 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:55:00.737456 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:55:00.739627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:55:00.744841 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 07:55:00.760123 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:55:00.762528 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:55:00.768681 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:55:00.770452 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:55:00.781097 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 07:55:00.788002 augenrules[1535]: No rules Aug 13 07:55:00.789098 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
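The DHCPv6/NDisc conflict above is harmless: the same interface identifier arrives both as a DHCPv6 /128 and as a SLAAC /64, and networkd keeps the SLAAC address, as the message says. The identifier 24:19ff:fee6:4ada is EUI-64-shaped; the sketch below rederives it from the MAC it implies, 02:24:19:e6:4a:da (an inference from the address, since the MAC itself never appears in this log):

    # EUI-64: flip the universal/local bit of the first MAC byte and
    # splice ff:fe into the middle (RFC 4291, appendix A).
    mac = bytes.fromhex("022419e64ada")
    iid = bytes([mac[0] ^ 0x02]) + mac[1:3] + b"\xff\xfe" + mac[3:]
    print(iid.hex())  # 002419fffee64ada, i.e. ...:24:19ff:fee6:4ada as logged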
Aug 13 07:55:00.790055 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 07:55:00.802465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 07:55:00.811437 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 07:55:00.815934 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 07:55:00.828417 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 07:55:00.831553 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 07:55:00.849486 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 07:55:00.852046 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 07:55:00.858480 systemd[1]: Finished ensure-sysext.service. Aug 13 07:55:00.861420 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:55:00.865056 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 07:55:00.867480 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 07:55:00.868517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 07:55:00.870888 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 07:55:00.872215 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 07:55:00.874015 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 07:55:00.874543 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 07:55:00.875861 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 07:55:00.877458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 07:55:00.885007 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 07:55:00.898037 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 07:55:00.898343 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 07:55:00.905453 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 07:55:00.908309 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 07:55:00.914284 systemd-resolved[1515]: Positive Trust Anchors: Aug 13 07:55:00.914312 systemd-resolved[1515]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 07:55:00.914356 systemd-resolved[1515]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 07:55:00.921918 systemd-resolved[1515]: Using system hostname 'srv-er0cq.gb1.brightbox.com'. Aug 13 07:55:00.925226 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 07:55:00.926211 systemd[1]: Reached target network.target - Network. Aug 13 07:55:00.926865 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 07:55:00.929555 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 07:55:00.992115 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 07:55:00.993675 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 07:55:00.994510 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 07:55:00.995412 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 07:55:00.996173 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 07:55:00.996939 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 07:55:00.996981 systemd[1]: Reached target paths.target - Path Units. Aug 13 07:55:00.997685 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 07:55:00.998660 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 07:55:00.999625 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 07:55:01.000393 systemd[1]: Reached target timers.target - Timer Units. Aug 13 07:55:01.004293 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 07:55:01.007268 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 07:55:01.010438 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 07:55:01.012596 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 07:55:01.013367 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 07:55:01.014043 systemd[1]: Reached target basic.target - Basic System. Aug 13 07:55:01.014999 systemd[1]: System is tainted: cgroupsv1 Aug 13 07:55:01.015123 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:55:01.015166 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 07:55:01.022394 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 07:55:01.027441 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 07:55:01.037428 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 07:55:01.045346 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
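The positive trust anchor logged above is the root zone's DS record for the 2017 root KSK. Reading the fields per RFC 4034: owner "." (the root), key tag 20326, algorithm 8 (RSA/SHA-256), digest type 2 (SHA-256 over the DNSKEY), then the digest itself. A trivial consistency check on the digest:

    # Fields copied from the logged record: ". IN DS 20326 8 2 e06d44b8...".
    key_tag, algorithm, digest_type = 20326, 8, 2
    digest = "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d"
    assert len(bytes.fromhex(digest)) == 32  # digest type 2 is SHA-256: 32 bytes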
Aug 13 07:55:01.059438 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 07:55:01.061422 jq[1577]: false Aug 13 07:55:01.061264 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 07:55:01.070421 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:01.076144 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 07:55:01.084180 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 07:55:01.095061 dbus-daemon[1574]: [system] SELinux support is enabled Aug 13 07:55:01.096391 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 07:55:01.102396 dbus-daemon[1574]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1260 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 07:55:01.104454 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 07:55:01.113496 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 07:55:01.124417 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 07:55:01.128830 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 07:55:01.137286 extend-filesystems[1578]: Found loop4 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found loop5 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found loop6 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found loop7 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda1 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda2 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda3 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found usr Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda4 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda6 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda7 Aug 13 07:55:01.137286 extend-filesystems[1578]: Found vda9 Aug 13 07:55:01.267315 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Aug 13 07:55:01.136557 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 07:55:01.267682 extend-filesystems[1578]: Checking size of /dev/vda9 Aug 13 07:55:01.267682 extend-filesystems[1578]: Resized partition /dev/vda9 Aug 13 07:55:01.160981 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 07:55:01.279629 extend-filesystems[1610]: resize2fs 1.47.1 (20-May-2024) Aug 13 07:55:01.168423 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 07:55:01.192767 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 07:55:01.293837 jq[1600]: true Aug 13 07:55:01.193135 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 07:55:01.201829 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 07:55:01.300563 update_engine[1594]: I20250813 07:55:01.297816 1594 main.cc:92] Flatcar Update Engine starting Aug 13 07:55:01.202226 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Aug 13 07:55:01.209536 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 07:55:01.209901 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 07:55:01.273447 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 07:55:01.310443 update_engine[1594]: I20250813 07:55:01.306671 1594 update_check_scheduler.cc:74] Next update check in 11m43s Aug 13 07:55:01.315072 (ntainerd)[1620]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 07:55:01.318040 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.systemd1' Aug 13 07:55:01.320104 systemd[1]: Started update-engine.service - Update Engine. Aug 13 07:55:01.322651 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 07:55:01.322712 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 07:55:01.323587 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 07:55:01.323615 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 07:55:01.324544 systemd-timesyncd[1567]: Contacted time server 178.215.228.24:123 (0.flatcar.pool.ntp.org). Aug 13 07:55:01.324610 systemd-timesyncd[1567]: Initial clock synchronization to Wed 2025-08-13 07:55:01.309193 UTC. Aug 13 07:55:01.325266 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 07:55:01.333423 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 07:55:01.369865 jq[1619]: true Aug 13 07:55:01.386429 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 07:55:01.394710 tar[1614]: linux-amd64/helm Aug 13 07:55:01.457537 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1634) Aug 13 07:55:01.482852 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Aug 13 07:55:01.532699 extend-filesystems[1610]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Aug 13 07:55:01.532699 extend-filesystems[1610]: old_desc_blocks = 1, new_desc_blocks = 8 Aug 13 07:55:01.532699 extend-filesystems[1610]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Aug 13 07:55:01.526081 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 07:55:01.572861 extend-filesystems[1578]: Resized filesystem in /dev/vda9 Aug 13 07:55:01.526482 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 07:55:01.537359 systemd-logind[1591]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 07:55:01.537392 systemd-logind[1591]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 07:55:01.542540 systemd-logind[1591]: New seat seat0. Aug 13 07:55:01.554066 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 07:55:01.601137 bash[1658]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:55:01.586821 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
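The online resize recorded above takes the root filesystem from 1,617,920 to 15,121,403 blocks of 4 KiB, roughly 6.2 GiB to 57.7 GiB, consistent with old_desc_blocks growing from 1 to 8 as resize2fs adds block-group descriptors. The arithmetic:

    BLOCK = 4096  # "(4k) blocks" per the resize2fs output above
    old_blocks, new_blocks = 1_617_920, 15_121_403
    print(f"{old_blocks * BLOCK / 2**30:.1f} GiB")  # 6.2 GiB before the resize
    print(f"{new_blocks * BLOCK / 2**30:.1f} GiB")  # 57.7 GiB after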
Aug 13 07:55:01.628563 systemd[1]: Starting sshkeys.service... Aug 13 07:55:01.796216 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 07:55:01.814401 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 07:55:01.929074 sshd_keygen[1608]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 07:55:02.103172 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 07:55:02.103436 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 07:55:02.109514 dbus-daemon[1574]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1639 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 07:55:02.121699 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 07:55:02.123151 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 07:55:02.136393 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 07:55:02.148968 locksmithd[1631]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 07:55:02.172203 polkitd[1691]: Started polkitd version 121 Aug 13 07:55:02.174952 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 07:55:02.175406 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 07:55:02.187757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 07:55:02.204019 polkitd[1691]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 07:55:02.204135 polkitd[1691]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 07:55:02.209266 polkitd[1691]: Finished loading, compiling and executing 2 rules Aug 13 07:55:02.225475 dbus-daemon[1574]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 07:55:02.225708 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 07:55:02.227849 polkitd[1691]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 07:55:02.243441 systemd-hostnamed[1639]: Hostname set to (static) Aug 13 07:55:02.269835 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 07:55:02.279801 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 07:55:02.293996 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 07:55:02.297410 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 07:55:02.332253 containerd[1620]: time="2025-08-13T07:55:02.331280892Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Aug 13 07:55:02.508276 containerd[1620]: time="2025-08-13T07:55:02.508013434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.511859437Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.100-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.511908957Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.511940582Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.512300019Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.512340918Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.512510149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:55:02.512677 containerd[1620]: time="2025-08-13T07:55:02.512550072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.513313 containerd[1620]: time="2025-08-13T07:55:02.513286973Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:55:02.513413 containerd[1620]: time="2025-08-13T07:55:02.513390241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.513547 containerd[1620]: time="2025-08-13T07:55:02.513513750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:55:02.513643 containerd[1620]: time="2025-08-13T07:55:02.513622450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.514406 containerd[1620]: time="2025-08-13T07:55:02.513844816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.514406 containerd[1620]: time="2025-08-13T07:55:02.514352581Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 13 07:55:02.514739 containerd[1620]: time="2025-08-13T07:55:02.514710311Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 13 07:55:02.514838 containerd[1620]: time="2025-08-13T07:55:02.514818079Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 13 07:55:02.515062 containerd[1620]: time="2025-08-13T07:55:02.515037954Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 13 07:55:02.515306 containerd[1620]: time="2025-08-13T07:55:02.515269889Z" level=info msg="metadata content store policy set" policy=shared Aug 13 07:55:02.519387 containerd[1620]: time="2025-08-13T07:55:02.519357307Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 13 07:55:02.519743 containerd[1620]: time="2025-08-13T07:55:02.519635147Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Aug 13 07:55:02.519743 containerd[1620]: time="2025-08-13T07:55:02.519701954Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 13 07:55:02.520106 containerd[1620]: time="2025-08-13T07:55:02.519964796Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 13 07:55:02.520106 containerd[1620]: time="2025-08-13T07:55:02.520035636Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 13 07:55:02.520651 containerd[1620]: time="2025-08-13T07:55:02.520458408Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 13 07:55:02.523564 containerd[1620]: time="2025-08-13T07:55:02.523534830Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.523883787Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.523928453Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.523956574Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.523977480Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524000876Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524020633Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524052452Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524072171Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524107847Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524133594Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524167548Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 13 07:55:02.524257 containerd[1620]: time="2025-08-13T07:55:02.524215458Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524824519Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524869369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." 
type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524890069Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524909897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524940759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.524981773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525011448Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525033050Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525053102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525076249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525095383Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525126533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525154254Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525206511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526260 containerd[1620]: time="2025-08-13T07:55:02.525257596Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525286627Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525369785Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525406667Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525425275Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525444827Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525460607Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525480133Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525524017Z" level=info msg="NRI interface is disabled by configuration." Aug 13 07:55:02.526906 containerd[1620]: time="2025-08-13T07:55:02.525556882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 13 07:55:02.527227 containerd[1620]: time="2025-08-13T07:55:02.526054936Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 13 07:55:02.527227 containerd[1620]: time="2025-08-13T07:55:02.526148158Z" level=info msg="Connect containerd service" Aug 13 07:55:02.530260 containerd[1620]: time="2025-08-13T07:55:02.528895008Z" level=info msg="using legacy CRI server" Aug 13 07:55:02.530260 containerd[1620]: time="2025-08-13T07:55:02.528919049Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 07:55:02.530260 containerd[1620]: 
time="2025-08-13T07:55:02.529257813Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 13 07:55:02.530959 containerd[1620]: time="2025-08-13T07:55:02.530927163Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 07:55:02.532398 containerd[1620]: time="2025-08-13T07:55:02.532164994Z" level=info msg="Start subscribing containerd event" Aug 13 07:55:02.532699 containerd[1620]: time="2025-08-13T07:55:02.532674577Z" level=info msg="Start recovering state" Aug 13 07:55:02.533049 containerd[1620]: time="2025-08-13T07:55:02.532783524Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 07:55:02.533385 containerd[1620]: time="2025-08-13T07:55:02.533361351Z" level=info msg="Start event monitor" Aug 13 07:55:02.533515 containerd[1620]: time="2025-08-13T07:55:02.533481544Z" level=info msg="Start snapshots syncer" Aug 13 07:55:02.534116 containerd[1620]: time="2025-08-13T07:55:02.533706854Z" level=info msg="Start cni network conf syncer for default" Aug 13 07:55:02.534116 containerd[1620]: time="2025-08-13T07:55:02.533755131Z" level=info msg="Start streaming server" Aug 13 07:55:02.534116 containerd[1620]: time="2025-08-13T07:55:02.533640126Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 07:55:02.536292 containerd[1620]: time="2025-08-13T07:55:02.536268068Z" level=info msg="containerd successfully booted in 0.208177s" Aug 13 07:55:02.536561 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 07:55:03.000657 tar[1614]: linux-amd64/LICENSE Aug 13 07:55:03.000657 tar[1614]: linux-amd64/README.md Aug 13 07:55:03.032381 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 07:55:03.951482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:03.967430 (kubelet)[1730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:55:03.973930 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 07:55:03.982589 systemd[1]: Started sshd@0-10.230.74.218:22-139.178.68.195:33054.service - OpenSSH per-connection server daemon (139.178.68.195:33054). Aug 13 07:55:04.722622 kubelet[1730]: E0813 07:55:04.722559 1730 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:55:04.726252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:55:04.726646 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:55:04.952359 sshd[1731]: Accepted publickey for core from 139.178.68.195 port 33054 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:04.955336 sshd[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:04.971411 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 07:55:04.989764 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
Aug 13 07:55:04.994669 systemd-logind[1591]: New session 1 of user core. Aug 13 07:55:05.025685 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 07:55:05.036850 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 07:55:05.052115 (systemd)[1745]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 07:55:05.207042 systemd[1745]: Queued start job for default target default.target. Aug 13 07:55:05.208327 systemd[1745]: Created slice app.slice - User Application Slice. Aug 13 07:55:05.208369 systemd[1745]: Reached target paths.target - Paths. Aug 13 07:55:05.208389 systemd[1745]: Reached target timers.target - Timers. Aug 13 07:55:05.217398 systemd[1745]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 07:55:05.226012 systemd[1745]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 07:55:05.226099 systemd[1745]: Reached target sockets.target - Sockets. Aug 13 07:55:05.226152 systemd[1745]: Reached target basic.target - Basic System. Aug 13 07:55:05.226217 systemd[1745]: Reached target default.target - Main User Target. Aug 13 07:55:05.226303 systemd[1745]: Startup finished in 160ms. Aug 13 07:55:05.226437 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 07:55:05.238048 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 07:55:05.895697 systemd[1]: Started sshd@1-10.230.74.218:22-139.178.68.195:33064.service - OpenSSH per-connection server daemon (139.178.68.195:33064). Aug 13 07:55:06.816302 sshd[1758]: Accepted publickey for core from 139.178.68.195 port 33064 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:06.818382 sshd[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:06.825123 systemd-logind[1591]: New session 2 of user core. Aug 13 07:55:06.830733 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 07:55:07.356681 login[1710]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:55:07.361965 login[1709]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 13 07:55:07.363736 systemd-logind[1591]: New session 3 of user core. Aug 13 07:55:07.377815 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 07:55:07.384011 systemd-logind[1591]: New session 4 of user core. Aug 13 07:55:07.388826 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 07:55:07.438947 sshd[1758]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:07.456128 systemd[1]: sshd@1-10.230.74.218:22-139.178.68.195:33064.service: Deactivated successfully. Aug 13 07:55:07.463558 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 07:55:07.464604 systemd-logind[1591]: Session 2 logged out. Waiting for processes to exit. Aug 13 07:55:07.466966 systemd-logind[1591]: Removed session 2. Aug 13 07:55:07.593738 systemd[1]: Started sshd@2-10.230.74.218:22-139.178.68.195:33074.service - OpenSSH per-connection server daemon (139.178.68.195:33074). 
Aug 13 07:55:08.185868 coreos-metadata[1572]: Aug 13 07:55:08.185 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:55:08.213296 coreos-metadata[1572]: Aug 13 07:55:08.212 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Aug 13 07:55:08.219111 coreos-metadata[1572]: Aug 13 07:55:08.219 INFO Fetch failed with 404: resource not found Aug 13 07:55:08.219111 coreos-metadata[1572]: Aug 13 07:55:08.219 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Aug 13 07:55:08.220063 coreos-metadata[1572]: Aug 13 07:55:08.220 INFO Fetch successful Aug 13 07:55:08.220246 coreos-metadata[1572]: Aug 13 07:55:08.220 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Aug 13 07:55:08.267599 coreos-metadata[1572]: Aug 13 07:55:08.267 INFO Fetch successful Aug 13 07:55:08.267599 coreos-metadata[1572]: Aug 13 07:55:08.267 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Aug 13 07:55:08.285413 coreos-metadata[1572]: Aug 13 07:55:08.285 INFO Fetch successful Aug 13 07:55:08.285413 coreos-metadata[1572]: Aug 13 07:55:08.285 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Aug 13 07:55:08.299487 coreos-metadata[1572]: Aug 13 07:55:08.299 INFO Fetch successful Aug 13 07:55:08.299674 coreos-metadata[1572]: Aug 13 07:55:08.299 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Aug 13 07:55:08.319887 coreos-metadata[1572]: Aug 13 07:55:08.319 INFO Fetch successful Aug 13 07:55:08.349900 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 07:55:08.351493 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 07:55:08.484026 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 33074 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:08.486681 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:08.493895 systemd-logind[1591]: New session 5 of user core. Aug 13 07:55:08.501786 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 07:55:09.109607 sshd[1794]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:09.113785 systemd[1]: sshd@2-10.230.74.218:22-139.178.68.195:33074.service: Deactivated successfully. Aug 13 07:55:09.117998 systemd-logind[1591]: Session 5 logged out. Waiting for processes to exit. Aug 13 07:55:09.119090 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 07:55:09.121509 systemd-logind[1591]: Removed session 5. 
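The metadata agent above first looks for a config drive, then falls back to the link-local HTTP service at 169.254.169.254; the OpenStack-flavored meta_data.json returns 404 on this cloud, while every EC2-compatible latest/meta-data key succeeds on the first attempt. A minimal sketch of the same fetches (it only works from inside an instance that serves this API):

    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/"
    # The exact keys fetched in the log above.
    for key in ("hostname", "instance-id", "instance-type", "local-ipv4", "public-ipv4"):
        with urllib.request.urlopen(BASE + key, timeout=2) as resp:
            print(key, "=", resp.read().decode().strip())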
Aug 13 07:55:09.179940 coreos-metadata[1671]: Aug 13 07:55:09.179 WARN failed to locate config-drive, using the metadata service API instead Aug 13 07:55:09.201685 coreos-metadata[1671]: Aug 13 07:55:09.201 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Aug 13 07:55:09.226841 coreos-metadata[1671]: Aug 13 07:55:09.226 INFO Fetch successful Aug 13 07:55:09.227048 coreos-metadata[1671]: Aug 13 07:55:09.227 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Aug 13 07:55:09.258813 coreos-metadata[1671]: Aug 13 07:55:09.258 INFO Fetch successful Aug 13 07:55:09.264330 unknown[1671]: wrote ssh authorized keys file for user: core Aug 13 07:55:09.284030 update-ssh-keys[1814]: Updated "/home/core/.ssh/authorized_keys" Aug 13 07:55:09.284883 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 07:55:09.290931 systemd[1]: Finished sshkeys.service. Aug 13 07:55:09.296386 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 07:55:09.296573 systemd[1]: Startup finished in 15.743s (kernel) + 13.437s (userspace) = 29.181s. Aug 13 07:55:14.814719 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 07:55:14.826569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:15.174484 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:15.190903 (kubelet)[1834]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:55:15.292256 kubelet[1834]: E0813 07:55:15.292168 1834 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:55:15.296789 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:55:15.298017 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:55:19.257611 systemd[1]: Started sshd@3-10.230.74.218:22-139.178.68.195:54588.service - OpenSSH per-connection server daemon (139.178.68.195:54588). Aug 13 07:55:20.140527 sshd[1843]: Accepted publickey for core from 139.178.68.195 port 54588 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:20.142791 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:20.148987 systemd-logind[1591]: New session 6 of user core. Aug 13 07:55:20.156661 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 07:55:20.761597 sshd[1843]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:20.765571 systemd[1]: sshd@3-10.230.74.218:22-139.178.68.195:54588.service: Deactivated successfully. Aug 13 07:55:20.769875 systemd-logind[1591]: Session 6 logged out. Waiting for processes to exit. Aug 13 07:55:20.771190 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 07:55:20.772352 systemd-logind[1591]: Removed session 6. Aug 13 07:55:20.915616 systemd[1]: Started sshd@4-10.230.74.218:22-139.178.68.195:48926.service - OpenSSH per-connection server daemon (139.178.68.195:48926). 
Aug 13 07:55:21.803746 sshd[1851]: Accepted publickey for core from 139.178.68.195 port 48926 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:21.805774 sshd[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:21.812182 systemd-logind[1591]: New session 7 of user core. Aug 13 07:55:21.819699 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 07:55:22.426625 sshd[1851]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:22.431563 systemd[1]: sshd@4-10.230.74.218:22-139.178.68.195:48926.service: Deactivated successfully. Aug 13 07:55:22.435575 systemd-logind[1591]: Session 7 logged out. Waiting for processes to exit. Aug 13 07:55:22.436390 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 07:55:22.437894 systemd-logind[1591]: Removed session 7. Aug 13 07:55:22.599640 systemd[1]: Started sshd@5-10.230.74.218:22-139.178.68.195:48930.service - OpenSSH per-connection server daemon (139.178.68.195:48930). Aug 13 07:55:23.543102 sshd[1859]: Accepted publickey for core from 139.178.68.195 port 48930 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:23.545113 sshd[1859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:23.553225 systemd-logind[1591]: New session 8 of user core. Aug 13 07:55:23.560954 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 07:55:24.203770 sshd[1859]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:24.207757 systemd[1]: sshd@5-10.230.74.218:22-139.178.68.195:48930.service: Deactivated successfully. Aug 13 07:55:24.211599 systemd-logind[1591]: Session 8 logged out. Waiting for processes to exit. Aug 13 07:55:24.211764 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 07:55:24.214289 systemd-logind[1591]: Removed session 8. Aug 13 07:55:24.346878 systemd[1]: Started sshd@6-10.230.74.218:22-139.178.68.195:48940.service - OpenSSH per-connection server daemon (139.178.68.195:48940). Aug 13 07:55:25.230098 sshd[1867]: Accepted publickey for core from 139.178.68.195 port 48940 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:25.232083 sshd[1867]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:25.238345 systemd-logind[1591]: New session 9 of user core. Aug 13 07:55:25.245683 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 07:55:25.314628 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 07:55:25.322727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:25.490468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:25.503924 (kubelet)[1883]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:55:25.615001 kubelet[1883]: E0813 07:55:25.614906 1883 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:55:25.617344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:55:25.617629 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Aug 13 07:55:25.727207 sudo[1891]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 07:55:25.727692 sudo[1891]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:55:25.741851 sudo[1891]: pam_unix(sudo:session): session closed for user root Aug 13 07:55:25.886647 sshd[1867]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:25.890796 systemd[1]: sshd@6-10.230.74.218:22-139.178.68.195:48940.service: Deactivated successfully. Aug 13 07:55:25.895194 systemd-logind[1591]: Session 9 logged out. Waiting for processes to exit. Aug 13 07:55:25.896783 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 07:55:25.898173 systemd-logind[1591]: Removed session 9. Aug 13 07:55:26.044472 systemd[1]: Started sshd@7-10.230.74.218:22-139.178.68.195:48944.service - OpenSSH per-connection server daemon (139.178.68.195:48944). Aug 13 07:55:26.929300 sshd[1896]: Accepted publickey for core from 139.178.68.195 port 48944 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:26.931786 sshd[1896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:26.939028 systemd-logind[1591]: New session 10 of user core. Aug 13 07:55:26.950832 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 07:55:27.411015 sudo[1901]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 07:55:27.411715 sudo[1901]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:55:27.416947 sudo[1901]: pam_unix(sudo:session): session closed for user root Aug 13 07:55:27.424810 sudo[1900]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 13 07:55:27.425298 sudo[1900]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:55:27.445577 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 13 07:55:27.448431 auditctl[1904]: No rules Aug 13 07:55:27.448982 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 07:55:27.449390 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 13 07:55:27.457696 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 13 07:55:27.495044 augenrules[1923]: No rules Aug 13 07:55:27.496118 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 13 07:55:27.499556 sudo[1900]: pam_unix(sudo:session): session closed for user root Aug 13 07:55:27.645599 sshd[1896]: pam_unix(sshd:session): session closed for user core Aug 13 07:55:27.650456 systemd[1]: sshd@7-10.230.74.218:22-139.178.68.195:48944.service: Deactivated successfully. Aug 13 07:55:27.654146 systemd-logind[1591]: Session 10 logged out. Waiting for processes to exit. Aug 13 07:55:27.655057 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 07:55:27.656518 systemd-logind[1591]: Removed session 10. Aug 13 07:55:27.797609 systemd[1]: Started sshd@8-10.230.74.218:22-139.178.68.195:48960.service - OpenSSH per-connection server daemon (139.178.68.195:48960). Aug 13 07:55:28.685367 sshd[1932]: Accepted publickey for core from 139.178.68.195 port 48960 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:55:28.687523 sshd[1932]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:55:28.693830 systemd-logind[1591]: New session 11 of user core. 
Aug 13 07:55:28.702663 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 07:55:29.166457 sudo[1936]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 07:55:29.166942 sudo[1936]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 07:55:29.840717 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 07:55:29.854295 (dockerd)[1952]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 07:55:30.557836 dockerd[1952]: time="2025-08-13T07:55:30.557689904Z" level=info msg="Starting up" Aug 13 07:55:30.721942 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2408631009-merged.mount: Deactivated successfully. Aug 13 07:55:30.862425 dockerd[1952]: time="2025-08-13T07:55:30.862167169Z" level=info msg="Loading containers: start." Aug 13 07:55:31.026493 kernel: Initializing XFRM netlink socket Aug 13 07:55:31.127866 systemd-networkd[1260]: docker0: Link UP Aug 13 07:55:31.153615 dockerd[1952]: time="2025-08-13T07:55:31.153561335Z" level=info msg="Loading containers: done." Aug 13 07:55:31.177068 dockerd[1952]: time="2025-08-13T07:55:31.176959297Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 07:55:31.177349 dockerd[1952]: time="2025-08-13T07:55:31.177139882Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Aug 13 07:55:31.177417 dockerd[1952]: time="2025-08-13T07:55:31.177403585Z" level=info msg="Daemon has completed initialization" Aug 13 07:55:31.213262 dockerd[1952]: time="2025-08-13T07:55:31.212983845Z" level=info msg="API listen on /run/docker.sock" Aug 13 07:55:31.213859 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 07:55:31.715717 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4029985826-merged.mount: Deactivated successfully. Aug 13 07:55:32.157412 containerd[1620]: time="2025-08-13T07:55:32.157287085Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Aug 13 07:55:32.257976 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 07:55:32.975410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount797682569.mount: Deactivated successfully. 
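dockerd's overlay2 warning above is about a kernel build option: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled, Docker turns off its native overlayfs diff path and falls back to the slower naive diff when building images. On a live host the option is visible as an overlayfs module parameter (path assumed from the standard sysfs layout):

    # Kernel knob behind the "Not using native diff" warning above.
    with open("/sys/module/overlay/parameters/redirect_dir") as f:
        print("redirect_dir =", f.read().strip())  # "Y" here, hence the warning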
Aug 13 07:55:34.949981 containerd[1620]: time="2025-08-13T07:55:34.949842658Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:34.951408 containerd[1620]: time="2025-08-13T07:55:34.950767638Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" Aug 13 07:55:34.955334 containerd[1620]: time="2025-08-13T07:55:34.955279766Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:34.960733 containerd[1620]: time="2025-08-13T07:55:34.960668093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:34.962349 containerd[1620]: time="2025-08-13T07:55:34.962026928Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 2.804599769s" Aug 13 07:55:34.962349 containerd[1620]: time="2025-08-13T07:55:34.962093512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Aug 13 07:55:34.964893 containerd[1620]: time="2025-08-13T07:55:34.964608556Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Aug 13 07:55:35.815379 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 07:55:35.824592 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:36.205623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:36.220927 (kubelet)[2167]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:55:36.436139 kubelet[2167]: E0813 07:55:36.435583 2167 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:55:36.438009 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:55:36.438406 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
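This is the third identical kubelet start/fail cycle: /var/lib/kubelet/config.yaml does not exist yet, and it typically will not until provisioning (for example kubeadm init or join) writes it, so systemd just keeps scheduling restarts on its hold-off timer. The three "Scheduled restart job" timestamps in this log are about 10.5 s apart:

    from datetime import datetime

    # kubelet.service scheduled-restart timestamps, copied from this log.
    starts = ["07:55:14.814719", "07:55:25.314628", "07:55:35.815379"]
    stamps = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
    for earlier, later in zip(stamps, stamps[1:]):
        print(f"{(later - earlier).total_seconds():.1f} s")  # ~10.5 s each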
Aug 13 07:55:37.608068 containerd[1620]: time="2025-08-13T07:55:37.607924599Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:37.613331 containerd[1620]: time="2025-08-13T07:55:37.613287863Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:37.615255 containerd[1620]: time="2025-08-13T07:55:37.613491474Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" Aug 13 07:55:37.623426 containerd[1620]: time="2025-08-13T07:55:37.623370860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:37.624667 containerd[1620]: time="2025-08-13T07:55:37.624610355Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 2.659954787s" Aug 13 07:55:37.624755 containerd[1620]: time="2025-08-13T07:55:37.624713845Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Aug 13 07:55:37.626878 containerd[1620]: time="2025-08-13T07:55:37.626647760Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Aug 13 07:55:39.836771 containerd[1620]: time="2025-08-13T07:55:39.836544752Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:39.839068 containerd[1620]: time="2025-08-13T07:55:39.838707771Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" Aug 13 07:55:39.839923 containerd[1620]: time="2025-08-13T07:55:39.839834103Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:39.845164 containerd[1620]: time="2025-08-13T07:55:39.845065543Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:39.848584 containerd[1620]: time="2025-08-13T07:55:39.846995262Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.219976119s" Aug 13 07:55:39.848584 containerd[1620]: time="2025-08-13T07:55:39.847060137Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Aug 13 07:55:39.848894 
containerd[1620]: time="2025-08-13T07:55:39.848849123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Aug 13 07:55:42.197966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4095755851.mount: Deactivated successfully. Aug 13 07:55:43.124217 containerd[1620]: time="2025-08-13T07:55:43.123684651Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:43.126216 containerd[1620]: time="2025-08-13T07:55:43.125812036Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Aug 13 07:55:43.127110 containerd[1620]: time="2025-08-13T07:55:43.126995382Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:43.132204 containerd[1620]: time="2025-08-13T07:55:43.132132699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:43.133462 containerd[1620]: time="2025-08-13T07:55:43.133408183Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 3.284479644s" Aug 13 07:55:43.133548 containerd[1620]: time="2025-08-13T07:55:43.133517979Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Aug 13 07:55:43.136475 containerd[1620]: time="2025-08-13T07:55:43.136429098Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 07:55:44.114225 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount452123543.mount: Deactivated successfully. 
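Each pull message pairs containerd's "bytes read" counter with a wall-clock duration, so effective registry throughput falls out directly; for kube-proxy, 30,354,633 bytes in 3.284s is roughly 8.8 MiB/s. A sketch of the arithmetic over the pulls logged so far:

```go
// Back-of-envelope throughput from the "bytes read" / duration pairs in the
// containerd pull messages above. Figures copied from the log lines.
package main

import "fmt"

func main() {
	pulls := []struct {
		image   string
		bytes   float64 // "bytes read" reported by containerd
		seconds float64 // duration from the "Pulled image ... in" message
	}{
		{"kube-apiserver:v1.31.8", 27960995, 2.804599769},
		{"kube-controller-manager:v1.31.8", 24713784, 2.659954787},
		{"kube-scheduler:v1.31.8", 18780394, 2.219976119},
		{"kube-proxy:v1.31.8", 30354633, 3.284479644},
	}
	for _, p := range pulls {
		mibps := p.bytes / p.seconds / (1 << 20)
		fmt.Printf("%-35s %6.1f MiB/s\n", p.image, mibps)
	}
}
```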
Aug 13 07:55:45.863199 containerd[1620]: time="2025-08-13T07:55:45.862916089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:45.864844 containerd[1620]: time="2025-08-13T07:55:45.864804031Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Aug 13 07:55:45.865838 containerd[1620]: time="2025-08-13T07:55:45.865727663Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:45.869917 containerd[1620]: time="2025-08-13T07:55:45.869838559Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:45.871561 containerd[1620]: time="2025-08-13T07:55:45.871513753Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.735027258s" Aug 13 07:55:45.871664 containerd[1620]: time="2025-08-13T07:55:45.871565898Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 07:55:45.872370 containerd[1620]: time="2025-08-13T07:55:45.872215404Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 07:55:46.130188 update_engine[1594]: I20250813 07:55:46.129817 1594 update_attempter.cc:509] Updating boot flags... Aug 13 07:55:46.211066 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2250) Aug 13 07:55:46.285268 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2252) Aug 13 07:55:46.565055 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 13 07:55:46.573536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:46.979495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:46.987894 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 07:55:47.076073 kubelet[2269]: E0813 07:55:47.075885 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 07:55:47.079531 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 07:55:47.079879 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 07:55:47.160191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4070133176.mount: Deactivated successfully. 
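The kubelet messages use klog's header layout: severity letter, MMDD, wall time with microseconds, PID, then source file:line, as in "E0813 07:55:47.075885 2269 run.go:72]". A small parser for that header, the kind of thing that is handy when sifting a log like this one (the regex reflects klog's documented format, applied here to a line copied from above):

```go
// Parse the klog header used by the kubelet messages in this log:
//   <L><MMDD> <HH:MM:SS.micros> <pid> <file>:<line>] <message>
// e.g. "E0813 07:55:47.075885    2269 run.go:72] ..."
package main

import (
	"fmt"
	"regexp"
)

var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+):(\d+)\] (.*)$`)

func main() {
	line := `E0813 07:55:47.075885    2269 run.go:72] "command failed" err="failed to load kubelet config file..."`
	m := klogHeader.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("not a klog line")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s:%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	fmt.Printf("message=%s\n", m[7])
}
```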
Aug 13 07:55:47.165570 containerd[1620]: time="2025-08-13T07:55:47.165498818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:47.167332 containerd[1620]: time="2025-08-13T07:55:47.167267061Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Aug 13 07:55:47.168402 containerd[1620]: time="2025-08-13T07:55:47.168326277Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:47.171730 containerd[1620]: time="2025-08-13T07:55:47.171665505Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:47.173354 containerd[1620]: time="2025-08-13T07:55:47.173080080Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.300780028s" Aug 13 07:55:47.173354 containerd[1620]: time="2025-08-13T07:55:47.173142736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 07:55:47.174514 containerd[1620]: time="2025-08-13T07:55:47.174385096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 07:55:48.727843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3408983255.mount: Deactivated successfully. Aug 13 07:55:51.475828 containerd[1620]: time="2025-08-13T07:55:51.475638410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:51.480861 containerd[1620]: time="2025-08-13T07:55:51.480785171Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Aug 13 07:55:51.482450 containerd[1620]: time="2025-08-13T07:55:51.482404139Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:51.491853 containerd[1620]: time="2025-08-13T07:55:51.491768544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:55:51.494967 containerd[1620]: time="2025-08-13T07:55:51.493654491Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 4.319208469s" Aug 13 07:55:51.494967 containerd[1620]: time="2025-08-13T07:55:51.493732641Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 07:55:55.710031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
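CRI-managed images (note the io.cri-containerd.image labels on every event above) live in containerd's "k8s.io" namespace. A sketch listing them with the containerd Go client, assuming the stock socket path and the v1.x client import that matches the containerd 1.7 line this node runs:

```go
// Sketch: list the images the CRI plugin just pulled, via the containerd Go
// client. Assumes the default socket path and the "k8s.io" namespace that
// Kubernetes-managed images live in (see the io.cri-containerd.image labels
// in the events above).
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	images, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range images {
		fmt.Println(img.Name()) // e.g. registry.k8s.io/etcd:3.5.15-0
	}
}
```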
Aug 13 07:55:55.727619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:55.779265 systemd[1]: Reloading requested from client PID 2362 ('systemctl') (unit session-11.scope)... Aug 13 07:55:55.779333 systemd[1]: Reloading... Aug 13 07:55:56.092293 zram_generator::config[2397]: No configuration found. Aug 13 07:55:56.289871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:55:56.399171 systemd[1]: Reloading finished in 618 ms. Aug 13 07:55:56.467523 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 07:55:56.467676 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 07:55:56.468382 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:56.488373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:55:56.659699 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:55:56.670989 (kubelet)[2478]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:55:56.800028 kubelet[2478]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:55:56.800028 kubelet[2478]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:55:56.800028 kubelet[2478]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
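Everything from here to the first StartContainer below is a bootstrap chicken-and-egg: the kubelet comes up before any apiserver exists, so every client-go reflector, the node lease controller, and the CSR request fail with "connect: connection refused" against 10.230.74.218:6443 until the static-pod apiserver it is about to create from /etc/kubernetes/manifests starts answering; note the lease retry interval doubling 200ms → 400ms → 800ms → 1.6s. What those retries amount to, as a standalone sketch (the address is from the log; the backoff values are illustrative, not kubelet's exact schedule):

```go
// Sketch of the readiness loop implied by the retry storm below: keep
// dialing the apiserver endpoint from the log until it accepts TCP
// connections, backing off between attempts.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	const apiserver = "10.230.74.218:6443"
	delay := 200 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver is accepting connections")
			return
		}
		fmt.Printf("dial %s: %v; retrying in %v\n", apiserver, err, delay)
		time.Sleep(delay)
		if delay < 5*time.Second {
			delay *= 2 // crude exponential backoff, mirroring the logged 200ms→1.6s
		}
	}
}
```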
Aug 13 07:55:56.800028 kubelet[2478]: I0813 07:55:56.798750 2478 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:55:57.946226 kubelet[2478]: I0813 07:55:57.944897 2478 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:55:57.946226 kubelet[2478]: I0813 07:55:57.945430 2478 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:55:57.946226 kubelet[2478]: I0813 07:55:57.945793 2478 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:55:57.984484 kubelet[2478]: I0813 07:55:57.984328 2478 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:55:57.986228 kubelet[2478]: E0813 07:55:57.985587 2478 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.74.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:57.997311 kubelet[2478]: E0813 07:55:57.997256 2478 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:55:57.997577 kubelet[2478]: I0813 07:55:57.997555 2478 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:55:58.008339 kubelet[2478]: I0813 07:55:58.008311 2478 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:55:58.010790 kubelet[2478]: I0813 07:55:58.010767 2478 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:55:58.011285 kubelet[2478]: I0813 07:55:58.011218 2478 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:55:58.011799 kubelet[2478]: I0813 07:55:58.011399 2478 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-er0cq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:55:58.012687 kubelet[2478]: I0813 07:55:58.012322 2478 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:55:58.012687 kubelet[2478]: I0813 07:55:58.012365 2478 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:55:58.012687 kubelet[2478]: I0813 07:55:58.012636 2478 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:55:58.019658 kubelet[2478]: I0813 07:55:58.019503 2478 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:55:58.019658 kubelet[2478]: I0813 07:55:58.019557 2478 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:55:58.021397 kubelet[2478]: I0813 07:55:58.020882 2478 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:55:58.021397 kubelet[2478]: I0813 07:55:58.020988 2478 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:55:58.023172 kubelet[2478]: W0813 07:55:58.022924 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.74.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-er0cq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:58.023172 kubelet[2478]: E0813 07:55:58.023010 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.230.74.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-er0cq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:58.024394 kubelet[2478]: W0813 07:55:58.024315 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.74.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:58.024691 kubelet[2478]: E0813 07:55:58.024528 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.74.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:58.024944 kubelet[2478]: I0813 07:55:58.024920 2478 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:55:58.029156 kubelet[2478]: I0813 07:55:58.028385 2478 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:55:58.029156 kubelet[2478]: W0813 07:55:58.028511 2478 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 07:55:58.031980 kubelet[2478]: I0813 07:55:58.031958 2478 server.go:1274] "Started kubelet" Aug 13 07:55:58.032721 kubelet[2478]: I0813 07:55:58.032683 2478 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:55:58.035385 kubelet[2478]: I0813 07:55:58.035356 2478 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:55:58.036253 kubelet[2478]: I0813 07:55:58.036215 2478 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:55:58.044814 kubelet[2478]: I0813 07:55:58.044744 2478 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:55:58.045154 kubelet[2478]: I0813 07:55:58.045125 2478 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:55:58.045277 kubelet[2478]: I0813 07:55:58.045139 2478 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:55:58.049012 kubelet[2478]: I0813 07:55:58.048990 2478 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:55:58.049446 kubelet[2478]: E0813 07:55:58.049421 2478 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-er0cq.gb1.brightbox.com\" not found" Aug 13 07:55:58.053514 kubelet[2478]: I0813 07:55:58.053490 2478 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:55:58.054167 kubelet[2478]: I0813 07:55:58.053738 2478 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:55:58.057267 kubelet[2478]: W0813 07:55:58.055881 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.74.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:58.057421 kubelet[2478]: E0813 07:55:58.057392 2478 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.74.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:58.061905 kubelet[2478]: E0813 07:55:58.059265 2478 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.74.218:6443/api/v1/namespaces/default/events\": dial tcp 10.230.74.218:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-er0cq.gb1.brightbox.com.185b447714ceacc2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-er0cq.gb1.brightbox.com,UID:srv-er0cq.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-er0cq.gb1.brightbox.com,},FirstTimestamp:2025-08-13 07:55:58.031826114 +0000 UTC m=+1.349130504,LastTimestamp:2025-08-13 07:55:58.031826114 +0000 UTC m=+1.349130504,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-er0cq.gb1.brightbox.com,}" Aug 13 07:55:58.064457 kubelet[2478]: E0813 07:55:58.064401 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.74.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-er0cq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.74.218:6443: connect: connection refused" interval="200ms" Aug 13 07:55:58.065396 kubelet[2478]: I0813 07:55:58.065357 2478 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:55:58.066776 kubelet[2478]: I0813 07:55:58.066748 2478 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:55:58.066853 kubelet[2478]: I0813 07:55:58.066814 2478 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:55:58.066912 kubelet[2478]: I0813 07:55:58.066862 2478 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:55:58.066968 kubelet[2478]: E0813 07:55:58.066943 2478 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:55:58.070730 kubelet[2478]: I0813 07:55:58.070701 2478 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:55:58.071156 kubelet[2478]: I0813 07:55:58.071128 2478 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:55:58.073616 kubelet[2478]: I0813 07:55:58.073595 2478 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:55:58.081962 kubelet[2478]: W0813 07:55:58.081893 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.74.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:58.082096 kubelet[2478]: E0813 07:55:58.081976 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.74.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:58.101573 kubelet[2478]: E0813 07:55:58.101534 2478 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:55:58.113846 kubelet[2478]: I0813 07:55:58.113807 2478 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:55:58.113846 kubelet[2478]: I0813 07:55:58.113832 2478 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:55:58.114071 kubelet[2478]: I0813 07:55:58.113873 2478 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:55:58.115884 kubelet[2478]: I0813 07:55:58.115846 2478 policy_none.go:49] "None policy: Start" Aug 13 07:55:58.116726 kubelet[2478]: I0813 07:55:58.116701 2478 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:55:58.116817 kubelet[2478]: I0813 07:55:58.116733 2478 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:55:58.128269 kubelet[2478]: I0813 07:55:58.127527 2478 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:55:58.128269 kubelet[2478]: I0813 07:55:58.127807 2478 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:55:58.128269 kubelet[2478]: I0813 07:55:58.127836 2478 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:55:58.131064 kubelet[2478]: I0813 07:55:58.131045 2478 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:55:58.133207 kubelet[2478]: E0813 07:55:58.133180 2478 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-er0cq.gb1.brightbox.com\" not found" Aug 13 07:55:58.232333 kubelet[2478]: I0813 07:55:58.231687 2478 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.232333 kubelet[2478]: E0813 07:55:58.232167 2478 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.74.218:6443/api/v1/nodes\": dial tcp 10.230.74.218:6443: connect: connection refused" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.266514 kubelet[2478]: E0813 07:55:58.266441 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.74.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-er0cq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.74.218:6443: connect: connection refused" interval="400ms" Aug 13 07:55:58.355252 kubelet[2478]: I0813 07:55:58.355162 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-k8s-certs\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355445 kubelet[2478]: I0813 07:55:58.355266 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355445 kubelet[2478]: I0813 07:55:58.355301 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-ca-certs\") pod 
\"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355445 kubelet[2478]: I0813 07:55:58.355327 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-flexvolume-dir\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355445 kubelet[2478]: I0813 07:55:58.355355 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b267584655eb6d56d9487290a98625-kubeconfig\") pod \"kube-scheduler-srv-er0cq.gb1.brightbox.com\" (UID: \"48b267584655eb6d56d9487290a98625\") " pod="kube-system/kube-scheduler-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355445 kubelet[2478]: I0813 07:55:58.355381 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-ca-certs\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355701 kubelet[2478]: I0813 07:55:58.355409 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-k8s-certs\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355701 kubelet[2478]: I0813 07:55:58.355466 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-kubeconfig\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.355701 kubelet[2478]: I0813 07:55:58.355494 2478 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.436133 kubelet[2478]: I0813 07:55:58.436082 2478 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.436612 kubelet[2478]: E0813 07:55:58.436575 2478 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.74.218:6443/api/v1/nodes\": dial tcp 10.230.74.218:6443: connect: connection refused" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.485911 containerd[1620]: time="2025-08-13T07:55:58.485712739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-er0cq.gb1.brightbox.com,Uid:a3e4fcd98082e59363f150095b315311,Namespace:kube-system,Attempt:0,}" Aug 13 07:55:58.486618 containerd[1620]: 
time="2025-08-13T07:55:58.485712850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-er0cq.gb1.brightbox.com,Uid:7ac847a2be5afe4e19a3bbbcd16b0b3e,Namespace:kube-system,Attempt:0,}" Aug 13 07:55:58.490045 containerd[1620]: time="2025-08-13T07:55:58.489734356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-er0cq.gb1.brightbox.com,Uid:48b267584655eb6d56d9487290a98625,Namespace:kube-system,Attempt:0,}" Aug 13 07:55:58.667793 kubelet[2478]: E0813 07:55:58.667739 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.74.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-er0cq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.74.218:6443: connect: connection refused" interval="800ms" Aug 13 07:55:58.839934 kubelet[2478]: I0813 07:55:58.839587 2478 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:58.840104 kubelet[2478]: E0813 07:55:58.840032 2478 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.74.218:6443/api/v1/nodes\": dial tcp 10.230.74.218:6443: connect: connection refused" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:59.154485 kubelet[2478]: W0813 07:55:59.154323 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.230.74.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:59.154485 kubelet[2478]: E0813 07:55:59.154389 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.230.74.218:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:59.163090 kubelet[2478]: W0813 07:55:59.163002 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.230.74.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-er0cq.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:59.163090 kubelet[2478]: E0813 07:55:59.163083 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.230.74.218:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-er0cq.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:59.469219 kubelet[2478]: E0813 07:55:59.469031 2478 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.74.218:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-er0cq.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.74.218:6443: connect: connection refused" interval="1.6s" Aug 13 07:55:59.584931 kubelet[2478]: W0813 07:55:59.584811 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.230.74.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:59.585101 kubelet[2478]: E0813 07:55:59.584934 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.Service: failed to list *v1.Service: Get \"https://10.230.74.218:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:59.610824 kubelet[2478]: W0813 07:55:59.610766 2478 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.230.74.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.230.74.218:6443: connect: connection refused Aug 13 07:55:59.610940 kubelet[2478]: E0813 07:55:59.610833 2478 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.230.74.218:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:55:59.643553 kubelet[2478]: I0813 07:55:59.643432 2478 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:59.644968 kubelet[2478]: E0813 07:55:59.644161 2478 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.230.74.218:6443/api/v1/nodes\": dial tcp 10.230.74.218:6443: connect: connection refused" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:55:59.750648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount344120729.mount: Deactivated successfully. Aug 13 07:55:59.759093 containerd[1620]: time="2025-08-13T07:55:59.759040036Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:55:59.760452 containerd[1620]: time="2025-08-13T07:55:59.760396379Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 13 07:55:59.762019 containerd[1620]: time="2025-08-13T07:55:59.761829150Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:55:59.766190 containerd[1620]: time="2025-08-13T07:55:59.766157009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:55:59.767811 containerd[1620]: time="2025-08-13T07:55:59.767430928Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:55:59.767811 containerd[1620]: time="2025-08-13T07:55:59.767474177Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 13 07:55:59.767972 containerd[1620]: time="2025-08-13T07:55:59.767933204Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:55:59.769310 containerd[1620]: time="2025-08-13T07:55:59.769264093Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 07:55:59.771263 containerd[1620]: 
time="2025-08-13T07:55:59.771025801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.283934187s" Aug 13 07:55:59.777826 containerd[1620]: time="2025-08-13T07:55:59.777764991Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.291112116s" Aug 13 07:55:59.781551 containerd[1620]: time="2025-08-13T07:55:59.781397028Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.29156524s" Aug 13 07:55:59.993755 containerd[1620]: time="2025-08-13T07:55:59.993555534Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:55:59.993977 containerd[1620]: time="2025-08-13T07:55:59.993872702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:55:59.994309 containerd[1620]: time="2025-08-13T07:55:59.994020456Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:55:59.995457 containerd[1620]: time="2025-08-13T07:55:59.995369155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:00.004994 containerd[1620]: time="2025-08-13T07:56:00.003637232Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:00.005165 containerd[1620]: time="2025-08-13T07:56:00.005105032Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:00.005468 containerd[1620]: time="2025-08-13T07:56:00.005416061Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:00.005752 containerd[1620]: time="2025-08-13T07:56:00.005649804Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:00.005752 containerd[1620]: time="2025-08-13T07:56:00.005707739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:00.005752 containerd[1620]: time="2025-08-13T07:56:00.005724872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:00.006154 containerd[1620]: time="2025-08-13T07:56:00.006069753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:00.006491 containerd[1620]: time="2025-08-13T07:56:00.006415172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:00.134099 kubelet[2478]: E0813 07:56:00.134042 2478 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.230.74.218:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.74.218:6443: connect: connection refused" logger="UnhandledError" Aug 13 07:56:00.172874 containerd[1620]: time="2025-08-13T07:56:00.172307713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-er0cq.gb1.brightbox.com,Uid:48b267584655eb6d56d9487290a98625,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d07087723be540100b42bf9349390dcde853cffed9f1743e6c14c76a4b09279\"" Aug 13 07:56:00.173709 containerd[1620]: time="2025-08-13T07:56:00.173439861Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-er0cq.gb1.brightbox.com,Uid:a3e4fcd98082e59363f150095b315311,Namespace:kube-system,Attempt:0,} returns sandbox id \"01c33f3f49feefc1a4b51e3b5ea1459b333c04f80541a1b0891b082a04795a12\"" Aug 13 07:56:00.180712 containerd[1620]: time="2025-08-13T07:56:00.180667607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-er0cq.gb1.brightbox.com,Uid:7ac847a2be5afe4e19a3bbbcd16b0b3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"f5a1951c668b66030874a329766f748666e6e27d12b0128d3673b34d9c16ac08\"" Aug 13 07:56:00.186292 containerd[1620]: time="2025-08-13T07:56:00.186248444Z" level=info msg="CreateContainer within sandbox \"f5a1951c668b66030874a329766f748666e6e27d12b0128d3673b34d9c16ac08\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 07:56:00.187056 containerd[1620]: time="2025-08-13T07:56:00.186678612Z" level=info msg="CreateContainer within sandbox \"8d07087723be540100b42bf9349390dcde853cffed9f1743e6c14c76a4b09279\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 07:56:00.187275 containerd[1620]: time="2025-08-13T07:56:00.186894453Z" level=info msg="CreateContainer within sandbox \"01c33f3f49feefc1a4b51e3b5ea1459b333c04f80541a1b0891b082a04795a12\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 07:56:00.201463 containerd[1620]: time="2025-08-13T07:56:00.201415953Z" level=info msg="CreateContainer within sandbox \"8d07087723be540100b42bf9349390dcde853cffed9f1743e6c14c76a4b09279\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6b09860021ad4db36c3c93ca601e1bf29f420fc7216dd47a0710393b14cfc3da\"" Aug 13 07:56:00.204248 containerd[1620]: time="2025-08-13T07:56:00.203218873Z" level=info msg="StartContainer for \"6b09860021ad4db36c3c93ca601e1bf29f420fc7216dd47a0710393b14cfc3da\"" Aug 13 07:56:00.215491 containerd[1620]: time="2025-08-13T07:56:00.215440425Z" level=info msg="CreateContainer within sandbox \"01c33f3f49feefc1a4b51e3b5ea1459b333c04f80541a1b0891b082a04795a12\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bff7ebabc3ebf58875021d0d6e44dec6d977ab39339d7a4cc676ac901f61fadf\"" Aug 13 07:56:00.216807 containerd[1620]: time="2025-08-13T07:56:00.216773668Z" level=info msg="StartContainer for 
\"bff7ebabc3ebf58875021d0d6e44dec6d977ab39339d7a4cc676ac901f61fadf\"" Aug 13 07:56:00.222387 containerd[1620]: time="2025-08-13T07:56:00.222337237Z" level=info msg="CreateContainer within sandbox \"f5a1951c668b66030874a329766f748666e6e27d12b0128d3673b34d9c16ac08\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a625191e89eb5881be0c7f6e347607d045d26653f2fc9eddbd2b5cdd64e39e4\"" Aug 13 07:56:00.223080 containerd[1620]: time="2025-08-13T07:56:00.223049811Z" level=info msg="StartContainer for \"1a625191e89eb5881be0c7f6e347607d045d26653f2fc9eddbd2b5cdd64e39e4\"" Aug 13 07:56:00.422616 containerd[1620]: time="2025-08-13T07:56:00.422568787Z" level=info msg="StartContainer for \"6b09860021ad4db36c3c93ca601e1bf29f420fc7216dd47a0710393b14cfc3da\" returns successfully" Aug 13 07:56:00.429578 containerd[1620]: time="2025-08-13T07:56:00.429540008Z" level=info msg="StartContainer for \"1a625191e89eb5881be0c7f6e347607d045d26653f2fc9eddbd2b5cdd64e39e4\" returns successfully" Aug 13 07:56:00.467754 containerd[1620]: time="2025-08-13T07:56:00.467151670Z" level=info msg="StartContainer for \"bff7ebabc3ebf58875021d0d6e44dec6d977ab39339d7a4cc676ac901f61fadf\" returns successfully" Aug 13 07:56:01.188677 systemd[1]: Started sshd@9-10.230.74.218:22-49.247.36.49:48147.service - OpenSSH per-connection server daemon (49.247.36.49:48147). Aug 13 07:56:01.253267 kubelet[2478]: I0813 07:56:01.252134 2478 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:02.967271 sshd[2753]: Received disconnect from 49.247.36.49 port 48147:11: Bye Bye [preauth] Aug 13 07:56:02.967271 sshd[2753]: Disconnected from authenticating user root 49.247.36.49 port 48147 [preauth] Aug 13 07:56:02.974928 systemd[1]: sshd@9-10.230.74.218:22-49.247.36.49:48147.service: Deactivated successfully. Aug 13 07:56:03.439647 kubelet[2478]: E0813 07:56:03.439547 2478 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-er0cq.gb1.brightbox.com\" not found" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:03.499255 kubelet[2478]: I0813 07:56:03.497116 2478 kubelet_node_status.go:75] "Successfully registered node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:04.028295 kubelet[2478]: I0813 07:56:04.028147 2478 apiserver.go:52] "Watching apiserver" Aug 13 07:56:04.053887 kubelet[2478]: I0813 07:56:04.053825 2478 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:56:05.717713 systemd[1]: Reloading requested from client PID 2760 ('systemctl') (unit session-11.scope)... Aug 13 07:56:05.717759 systemd[1]: Reloading... Aug 13 07:56:05.834476 zram_generator::config[2799]: No configuration found. Aug 13 07:56:06.077876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 07:56:06.202338 systemd[1]: Reloading finished in 483 ms. Aug 13 07:56:06.260699 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:56:06.280737 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 07:56:06.281445 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 07:56:06.292538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 07:56:06.736468 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 07:56:06.749241 (kubelet)[2873]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 07:56:06.868685 kubelet[2873]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:56:06.869427 kubelet[2873]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 07:56:06.869427 kubelet[2873]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 07:56:06.878871 kubelet[2873]: I0813 07:56:06.877564 2873 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 07:56:06.895485 kubelet[2873]: I0813 07:56:06.895278 2873 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 07:56:06.895485 kubelet[2873]: I0813 07:56:06.895319 2873 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 07:56:06.895709 kubelet[2873]: I0813 07:56:06.895619 2873 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 07:56:06.909778 kubelet[2873]: I0813 07:56:06.909113 2873 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 07:56:06.918087 kubelet[2873]: I0813 07:56:06.917788 2873 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 07:56:06.960799 kubelet[2873]: E0813 07:56:06.960202 2873 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Aug 13 07:56:06.960799 kubelet[2873]: I0813 07:56:06.960267 2873 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Aug 13 07:56:06.966300 kubelet[2873]: I0813 07:56:06.966269 2873 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 07:56:06.967332 kubelet[2873]: I0813 07:56:06.966844 2873 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 07:56:06.967332 kubelet[2873]: I0813 07:56:06.967057 2873 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 07:56:06.967523 kubelet[2873]: I0813 07:56:06.967166 2873 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-er0cq.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Aug 13 07:56:06.967523 kubelet[2873]: I0813 07:56:06.967517 2873 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 07:56:06.968119 kubelet[2873]: I0813 07:56:06.967535 2873 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 07:56:06.968119 kubelet[2873]: I0813 07:56:06.967639 2873 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:56:06.968119 kubelet[2873]: I0813 07:56:06.968034 2873 kubelet.go:408] "Attempting to sync node with API server" Aug 13 07:56:06.968119 kubelet[2873]: I0813 07:56:06.968065 2873 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 07:56:06.970865 kubelet[2873]: I0813 07:56:06.969812 2873 kubelet.go:314] "Adding apiserver pod source" Aug 13 07:56:06.970865 kubelet[2873]: I0813 07:56:06.969849 2873 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 07:56:06.975904 kubelet[2873]: I0813 07:56:06.975877 2873 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Aug 13 07:56:06.976651 kubelet[2873]: I0813 07:56:06.976629 2873 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 07:56:06.978132 kubelet[2873]: I0813 07:56:06.978112 2873 server.go:1274] "Started kubelet" Aug 13 07:56:07.000288 kubelet[2873]: I0813 07:56:06.997524 2873 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 07:56:07.011831 
kubelet[2873]: I0813 07:56:07.011798 2873 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 07:56:07.027538 kubelet[2873]: I0813 07:56:07.014885 2873 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 07:56:07.029814 kubelet[2873]: I0813 07:56:07.029787 2873 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 07:56:07.029898 kubelet[2873]: I0813 07:56:07.019568 2873 server.go:449] "Adding debug handlers to kubelet server" Aug 13 07:56:07.036654 kubelet[2873]: I0813 07:56:06.999392 2873 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 07:56:07.039826 kubelet[2873]: I0813 07:56:07.039801 2873 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 07:56:07.040318 kubelet[2873]: I0813 07:56:07.040301 2873 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 07:56:07.040732 kubelet[2873]: I0813 07:56:07.040710 2873 reconciler.go:26] "Reconciler: start to sync state" Aug 13 07:56:07.048223 kubelet[2873]: I0813 07:56:07.048197 2873 factory.go:221] Registration of the systemd container factory successfully Aug 13 07:56:07.048519 kubelet[2873]: I0813 07:56:07.048491 2873 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 07:56:07.049514 kubelet[2873]: E0813 07:56:07.049491 2873 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 07:56:07.053136 kubelet[2873]: I0813 07:56:07.053117 2873 factory.go:221] Registration of the containerd container factory successfully Aug 13 07:56:07.081152 kubelet[2873]: I0813 07:56:07.081082 2873 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 07:56:07.089273 kubelet[2873]: I0813 07:56:07.088666 2873 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 07:56:07.089273 kubelet[2873]: I0813 07:56:07.088755 2873 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 07:56:07.089273 kubelet[2873]: I0813 07:56:07.088842 2873 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 07:56:07.096338 kubelet[2873]: E0813 07:56:07.096213 2873 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 07:56:07.195916 kubelet[2873]: I0813 07:56:07.195881 2873 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 07:56:07.196082 kubelet[2873]: I0813 07:56:07.195911 2873 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 07:56:07.196082 kubelet[2873]: I0813 07:56:07.195997 2873 state_mem.go:36] "Initialized new in-memory state store" Aug 13 07:56:07.196525 kubelet[2873]: I0813 07:56:07.196249 2873 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 07:56:07.196525 kubelet[2873]: E0813 07:56:07.196456 2873 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 13 07:56:07.196525 kubelet[2873]: I0813 07:56:07.196392 2873 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 07:56:07.196525 kubelet[2873]: I0813 07:56:07.196523 2873 policy_none.go:49] "None policy: Start" Aug 13 07:56:07.200095 kubelet[2873]: I0813 07:56:07.199862 2873 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 07:56:07.200095 kubelet[2873]: I0813 07:56:07.199967 2873 state_mem.go:35] "Initializing new in-memory state store" Aug 13 07:56:07.202442 kubelet[2873]: I0813 07:56:07.202404 2873 state_mem.go:75] "Updated machine memory state" Aug 13 07:56:07.207945 kubelet[2873]: I0813 07:56:07.207483 2873 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 07:56:07.208933 kubelet[2873]: I0813 07:56:07.208679 2873 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 07:56:07.208933 kubelet[2873]: I0813 07:56:07.208726 2873 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 07:56:07.213491 kubelet[2873]: I0813 07:56:07.213463 2873 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 07:56:07.331506 kubelet[2873]: I0813 07:56:07.331305 2873 kubelet_node_status.go:72] "Attempting to register node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.345274 kubelet[2873]: I0813 07:56:07.344058 2873 kubelet_node_status.go:111] "Node was previously registered" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.345274 kubelet[2873]: I0813 07:56:07.344177 2873 kubelet_node_status.go:75] "Successfully registered node" node="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.415904 kubelet[2873]: W0813 07:56:07.415817 2873 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:56:07.417601 kubelet[2873]: W0813 07:56:07.416354 2873 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:56:07.418428 kubelet[2873]: W0813 07:56:07.418391 2873 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:56:07.444268 kubelet[2873]: 
I0813 07:56:07.443766 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-k8s-certs\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.444268 kubelet[2873]: I0813 07:56:07.443820 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-kubeconfig\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.444268 kubelet[2873]: I0813 07:56:07.443855 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-k8s-certs\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.444268 kubelet[2873]: I0813 07:56:07.443881 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-ca-certs\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.444268 kubelet[2873]: I0813 07:56:07.443911 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-flexvolume-dir\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.445121 kubelet[2873]: I0813 07:56:07.443961 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3e4fcd98082e59363f150095b315311-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-er0cq.gb1.brightbox.com\" (UID: \"a3e4fcd98082e59363f150095b315311\") " pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.445121 kubelet[2873]: I0813 07:56:07.444009 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/48b267584655eb6d56d9487290a98625-kubeconfig\") pod \"kube-scheduler-srv-er0cq.gb1.brightbox.com\" (UID: \"48b267584655eb6d56d9487290a98625\") " pod="kube-system/kube-scheduler-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.445121 kubelet[2873]: I0813 07:56:07.444032 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-ca-certs\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.445121 kubelet[2873]: I0813 07:56:07.444072 2873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7ac847a2be5afe4e19a3bbbcd16b0b3e-usr-share-ca-certificates\") pod \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" (UID: \"7ac847a2be5afe4e19a3bbbcd16b0b3e\") " pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:07.976440 kubelet[2873]: I0813 07:56:07.976038 2873 apiserver.go:52] "Watching apiserver" Aug 13 07:56:08.041104 kubelet[2873]: I0813 07:56:08.041048 2873 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 07:56:08.155227 kubelet[2873]: W0813 07:56:08.153840 2873 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 13 07:56:08.155227 kubelet[2873]: E0813 07:56:08.153981 2873 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-er0cq.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" Aug 13 07:56:08.202091 kubelet[2873]: I0813 07:56:08.201884 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-er0cq.gb1.brightbox.com" podStartSLOduration=1.201853381 podStartE2EDuration="1.201853381s" podCreationTimestamp="2025-08-13 07:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:56:08.200933665 +0000 UTC m=+1.431606051" watchObservedRunningTime="2025-08-13 07:56:08.201853381 +0000 UTC m=+1.432525765" Aug 13 07:56:08.214414 kubelet[2873]: I0813 07:56:08.214291 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-er0cq.gb1.brightbox.com" podStartSLOduration=1.214273588 podStartE2EDuration="1.214273588s" podCreationTimestamp="2025-08-13 07:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:56:08.213795646 +0000 UTC m=+1.444468044" watchObservedRunningTime="2025-08-13 07:56:08.214273588 +0000 UTC m=+1.444945969" Aug 13 07:56:08.228927 kubelet[2873]: I0813 07:56:08.228383 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-er0cq.gb1.brightbox.com" podStartSLOduration=1.228365573 podStartE2EDuration="1.228365573s" podCreationTimestamp="2025-08-13 07:56:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:56:08.227117702 +0000 UTC m=+1.457790101" watchObservedRunningTime="2025-08-13 07:56:08.228365573 +0000 UTC m=+1.459037967" Aug 13 07:56:11.048422 kubelet[2873]: I0813 07:56:11.048341 2873 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 07:56:11.052570 kubelet[2873]: I0813 07:56:11.050392 2873 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 07:56:11.052650 containerd[1620]: time="2025-08-13T07:56:11.049869112Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 07:56:12.078266 kubelet[2873]: I0813 07:56:12.076927 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/047ebe0a-cf22-46c2-9293-e0da51edfaff-xtables-lock\") pod \"kube-proxy-qjg9j\" (UID: \"047ebe0a-cf22-46c2-9293-e0da51edfaff\") " pod="kube-system/kube-proxy-qjg9j" Aug 13 07:56:12.078266 kubelet[2873]: I0813 07:56:12.077027 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x8sm\" (UniqueName: \"kubernetes.io/projected/047ebe0a-cf22-46c2-9293-e0da51edfaff-kube-api-access-2x8sm\") pod \"kube-proxy-qjg9j\" (UID: \"047ebe0a-cf22-46c2-9293-e0da51edfaff\") " pod="kube-system/kube-proxy-qjg9j" Aug 13 07:56:12.078266 kubelet[2873]: I0813 07:56:12.077101 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/047ebe0a-cf22-46c2-9293-e0da51edfaff-kube-proxy\") pod \"kube-proxy-qjg9j\" (UID: \"047ebe0a-cf22-46c2-9293-e0da51edfaff\") " pod="kube-system/kube-proxy-qjg9j" Aug 13 07:56:12.078266 kubelet[2873]: I0813 07:56:12.077128 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/047ebe0a-cf22-46c2-9293-e0da51edfaff-lib-modules\") pod \"kube-proxy-qjg9j\" (UID: \"047ebe0a-cf22-46c2-9293-e0da51edfaff\") " pod="kube-system/kube-proxy-qjg9j" Aug 13 07:56:12.179975 kubelet[2873]: I0813 07:56:12.178599 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crmj6\" (UniqueName: \"kubernetes.io/projected/d251cd64-ad23-47a3-9ff9-24065c3d9950-kube-api-access-crmj6\") pod \"tigera-operator-5bf8dfcb4-xhnjv\" (UID: \"d251cd64-ad23-47a3-9ff9-24065c3d9950\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xhnjv" Aug 13 07:56:12.179975 kubelet[2873]: I0813 07:56:12.178679 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d251cd64-ad23-47a3-9ff9-24065c3d9950-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-xhnjv\" (UID: \"d251cd64-ad23-47a3-9ff9-24065c3d9950\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-xhnjv" Aug 13 07:56:12.290507 containerd[1620]: time="2025-08-13T07:56:12.290396268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjg9j,Uid:047ebe0a-cf22-46c2-9293-e0da51edfaff,Namespace:kube-system,Attempt:0,}" Aug 13 07:56:12.342471 containerd[1620]: time="2025-08-13T07:56:12.341188064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:12.342935 containerd[1620]: time="2025-08-13T07:56:12.342086508Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:12.342935 containerd[1620]: time="2025-08-13T07:56:12.342118210Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:12.342935 containerd[1620]: time="2025-08-13T07:56:12.342326611Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:12.417829 containerd[1620]: time="2025-08-13T07:56:12.417764655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjg9j,Uid:047ebe0a-cf22-46c2-9293-e0da51edfaff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f1b6b982d40f54247e0ecd12a72c181753ee7eff5ade3dee904585071a9f9fd\"" Aug 13 07:56:12.428351 containerd[1620]: time="2025-08-13T07:56:12.428307210Z" level=info msg="CreateContainer within sandbox \"8f1b6b982d40f54247e0ecd12a72c181753ee7eff5ade3dee904585071a9f9fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 07:56:12.447666 containerd[1620]: time="2025-08-13T07:56:12.447475914Z" level=info msg="CreateContainer within sandbox \"8f1b6b982d40f54247e0ecd12a72c181753ee7eff5ade3dee904585071a9f9fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7dcb76cd0bb26d460ed7c26a812c88c67c70faa46dd2869377bdfae2b4017073\"" Aug 13 07:56:12.451317 containerd[1620]: time="2025-08-13T07:56:12.449654222Z" level=info msg="StartContainer for \"7dcb76cd0bb26d460ed7c26a812c88c67c70faa46dd2869377bdfae2b4017073\"" Aug 13 07:56:12.485441 containerd[1620]: time="2025-08-13T07:56:12.485390016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xhnjv,Uid:d251cd64-ad23-47a3-9ff9-24065c3d9950,Namespace:tigera-operator,Attempt:0,}" Aug 13 07:56:12.554945 containerd[1620]: time="2025-08-13T07:56:12.553917384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:12.554945 containerd[1620]: time="2025-08-13T07:56:12.553993936Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:12.554945 containerd[1620]: time="2025-08-13T07:56:12.554110829Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:12.554945 containerd[1620]: time="2025-08-13T07:56:12.554379568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:12.561501 containerd[1620]: time="2025-08-13T07:56:12.561456306Z" level=info msg="StartContainer for \"7dcb76cd0bb26d460ed7c26a812c88c67c70faa46dd2869377bdfae2b4017073\" returns successfully" Aug 13 07:56:12.654150 containerd[1620]: time="2025-08-13T07:56:12.653920029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-xhnjv,Uid:d251cd64-ad23-47a3-9ff9-24065c3d9950,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"26ad135025319cf09c86fd38d9ac45902b9645ba3ce666afc50a22af1be1a765\"" Aug 13 07:56:12.658217 containerd[1620]: time="2025-08-13T07:56:12.658186360Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 07:56:13.182705 kubelet[2873]: I0813 07:56:13.181822 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjg9j" podStartSLOduration=2.181745775 podStartE2EDuration="2.181745775s" podCreationTimestamp="2025-08-13 07:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:56:13.181618346 +0000 UTC m=+6.412290733" watchObservedRunningTime="2025-08-13 07:56:13.181745775 +0000 UTC m=+6.412418158" Aug 13 07:56:13.223424 systemd[1]: run-containerd-runc-k8s.io-8f1b6b982d40f54247e0ecd12a72c181753ee7eff5ade3dee904585071a9f9fd-runc.Nak1jq.mount: Deactivated successfully. Aug 13 07:56:14.399512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3254417171.mount: Deactivated successfully. Aug 13 07:56:15.531594 containerd[1620]: time="2025-08-13T07:56:15.531399546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:15.533198 containerd[1620]: time="2025-08-13T07:56:15.532822905Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 07:56:15.534514 containerd[1620]: time="2025-08-13T07:56:15.534442122Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:15.538636 containerd[1620]: time="2025-08-13T07:56:15.538573221Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:15.540099 containerd[1620]: time="2025-08-13T07:56:15.539889551Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.881434981s" Aug 13 07:56:15.540099 containerd[1620]: time="2025-08-13T07:56:15.539937602Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 07:56:15.544731 containerd[1620]: time="2025-08-13T07:56:15.544407925Z" level=info msg="CreateContainer within sandbox \"26ad135025319cf09c86fd38d9ac45902b9645ba3ce666afc50a22af1be1a765\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 07:56:15.562678 containerd[1620]: time="2025-08-13T07:56:15.562629084Z" level=info 
msg="CreateContainer within sandbox \"26ad135025319cf09c86fd38d9ac45902b9645ba3ce666afc50a22af1be1a765\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b0341560b53931075cf37a681b33a0dd395526525a99ea0ffb5a727709bd2c6c\"" Aug 13 07:56:15.564926 containerd[1620]: time="2025-08-13T07:56:15.564561814Z" level=info msg="StartContainer for \"b0341560b53931075cf37a681b33a0dd395526525a99ea0ffb5a727709bd2c6c\"" Aug 13 07:56:15.680597 containerd[1620]: time="2025-08-13T07:56:15.679904363Z" level=info msg="StartContainer for \"b0341560b53931075cf37a681b33a0dd395526525a99ea0ffb5a727709bd2c6c\" returns successfully" Aug 13 07:56:17.187713 kubelet[2873]: I0813 07:56:17.186847 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhnjv" podStartSLOduration=2.302409457 podStartE2EDuration="5.186803694s" podCreationTimestamp="2025-08-13 07:56:12 +0000 UTC" firstStartedPulling="2025-08-13 07:56:12.656984311 +0000 UTC m=+5.887656687" lastFinishedPulling="2025-08-13 07:56:15.541378549 +0000 UTC m=+8.772050924" observedRunningTime="2025-08-13 07:56:16.191927244 +0000 UTC m=+9.422599639" watchObservedRunningTime="2025-08-13 07:56:17.186803694 +0000 UTC m=+10.417476077" Aug 13 07:56:23.190159 sudo[1936]: pam_unix(sudo:session): session closed for user root Aug 13 07:56:23.344097 sshd[1932]: pam_unix(sshd:session): session closed for user core Aug 13 07:56:23.353328 systemd-logind[1591]: Session 11 logged out. Waiting for processes to exit. Aug 13 07:56:23.354347 systemd[1]: sshd@8-10.230.74.218:22-139.178.68.195:48960.service: Deactivated successfully. Aug 13 07:56:23.372083 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 07:56:23.376121 systemd-logind[1591]: Removed session 11. 
Aug 13 07:56:29.001520 kubelet[2873]: I0813 07:56:29.001057 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baaf43e9-06ee-4d37-a7b7-c4a485547892-tigera-ca-bundle\") pod \"calico-typha-549d5cf4b7-c74s7\" (UID: \"baaf43e9-06ee-4d37-a7b7-c4a485547892\") " pod="calico-system/calico-typha-549d5cf4b7-c74s7" Aug 13 07:56:29.001520 kubelet[2873]: I0813 07:56:29.001171 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-86wr2\" (UniqueName: \"kubernetes.io/projected/baaf43e9-06ee-4d37-a7b7-c4a485547892-kube-api-access-86wr2\") pod \"calico-typha-549d5cf4b7-c74s7\" (UID: \"baaf43e9-06ee-4d37-a7b7-c4a485547892\") " pod="calico-system/calico-typha-549d5cf4b7-c74s7" Aug 13 07:56:29.001520 kubelet[2873]: I0813 07:56:29.001217 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/baaf43e9-06ee-4d37-a7b7-c4a485547892-typha-certs\") pod \"calico-typha-549d5cf4b7-c74s7\" (UID: \"baaf43e9-06ee-4d37-a7b7-c4a485547892\") " pod="calico-system/calico-typha-549d5cf4b7-c74s7" Aug 13 07:56:29.188202 containerd[1620]: time="2025-08-13T07:56:29.188063053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549d5cf4b7-c74s7,Uid:baaf43e9-06ee-4d37-a7b7-c4a485547892,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:29.206263 kubelet[2873]: I0813 07:56:29.202700 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-cni-log-dir\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206263 kubelet[2873]: I0813 07:56:29.202831 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-flexvol-driver-host\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206263 kubelet[2873]: I0813 07:56:29.202969 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-cni-bin-dir\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206263 kubelet[2873]: I0813 07:56:29.203004 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-cni-net-dir\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206263 kubelet[2873]: I0813 07:56:29.203080 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-var-lib-calico\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206760 kubelet[2873]: I0813 07:56:29.203111 2873 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-tigera-ca-bundle\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206760 kubelet[2873]: I0813 07:56:29.203139 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-var-run-calico\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206760 kubelet[2873]: I0813 07:56:29.203177 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpk2f\" (UniqueName: \"kubernetes.io/projected/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-kube-api-access-lpk2f\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206760 kubelet[2873]: I0813 07:56:29.203205 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-policysync\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.206760 kubelet[2873]: I0813 07:56:29.203258 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-lib-modules\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.209270 kubelet[2873]: I0813 07:56:29.203308 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-node-certs\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.209270 kubelet[2873]: I0813 07:56:29.205365 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a4531275-bbbe-485b-9e8d-2d3fbe4dff56-xtables-lock\") pod \"calico-node-vx9m7\" (UID: \"a4531275-bbbe-485b-9e8d-2d3fbe4dff56\") " pod="calico-system/calico-node-vx9m7" Aug 13 07:56:29.276192 containerd[1620]: time="2025-08-13T07:56:29.270250330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:29.276192 containerd[1620]: time="2025-08-13T07:56:29.275679248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:29.276192 containerd[1620]: time="2025-08-13T07:56:29.275724808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:29.276634 containerd[1620]: time="2025-08-13T07:56:29.276492905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:29.315108 kubelet[2873]: E0813 07:56:29.314789 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.315108 kubelet[2873]: W0813 07:56:29.314877 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.315108 kubelet[2873]: E0813 07:56:29.314974 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.317581 kubelet[2873]: E0813 07:56:29.317050 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.317581 kubelet[2873]: W0813 07:56:29.317082 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.317581 kubelet[2873]: E0813 07:56:29.317119 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.322336 kubelet[2873]: E0813 07:56:29.319759 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.322336 kubelet[2873]: W0813 07:56:29.319789 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.322336 kubelet[2873]: E0813 07:56:29.319848 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.329250 kubelet[2873]: E0813 07:56:29.325975 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.329250 kubelet[2873]: W0813 07:56:29.326015 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.329250 kubelet[2873]: E0813 07:56:29.326330 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.329761 kubelet[2873]: E0813 07:56:29.329734 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.329761 kubelet[2873]: W0813 07:56:29.329755 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.332986 kubelet[2873]: E0813 07:56:29.330808 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.332986 kubelet[2873]: E0813 07:56:29.330964 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.332986 kubelet[2873]: W0813 07:56:29.331121 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.333700 kubelet[2873]: E0813 07:56:29.333180 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.334947 kubelet[2873]: E0813 07:56:29.333741 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.334947 kubelet[2873]: W0813 07:56:29.333757 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.334947 kubelet[2873]: E0813 07:56:29.334363 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.336215 kubelet[2873]: E0813 07:56:29.336022 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.336215 kubelet[2873]: W0813 07:56:29.336046 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.337271 kubelet[2873]: E0813 07:56:29.337126 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.339262 kubelet[2873]: E0813 07:56:29.338163 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.339262 kubelet[2873]: W0813 07:56:29.338186 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.339262 kubelet[2873]: E0813 07:56:29.338222 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.339933 kubelet[2873]: E0813 07:56:29.339905 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.339933 kubelet[2873]: W0813 07:56:29.339927 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.340123 kubelet[2873]: E0813 07:56:29.340088 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.341330 kubelet[2873]: E0813 07:56:29.341309 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.341330 kubelet[2873]: W0813 07:56:29.341331 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.341523 kubelet[2873]: E0813 07:56:29.341369 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.342870 kubelet[2873]: E0813 07:56:29.342835 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.342870 kubelet[2873]: W0813 07:56:29.342867 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.343108 kubelet[2873]: E0813 07:56:29.342905 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.345981 kubelet[2873]: E0813 07:56:29.345498 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.345981 kubelet[2873]: W0813 07:56:29.345521 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.345981 kubelet[2873]: E0813 07:56:29.345689 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.351192 kubelet[2873]: E0813 07:56:29.350918 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.351192 kubelet[2873]: W0813 07:56:29.350943 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.356562 kubelet[2873]: E0813 07:56:29.354348 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.357594 kubelet[2873]: E0813 07:56:29.357567 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.357691 kubelet[2873]: W0813 07:56:29.357591 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.362094 kubelet[2873]: E0813 07:56:29.362069 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.362094 kubelet[2873]: W0813 07:56:29.362091 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.365343 kubelet[2873]: E0813 07:56:29.365316 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.365343 kubelet[2873]: W0813 07:56:29.365339 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.370976 kubelet[2873]: E0813 07:56:29.368309 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.370976 kubelet[2873]: W0813 07:56:29.368330 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.370976 kubelet[2873]: E0813 07:56:29.368353 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.370976 kubelet[2873]: E0813 07:56:29.368555 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.370976 kubelet[2873]: E0813 07:56:29.368586 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.383620 kubelet[2873]: E0813 07:56:29.383247 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.383620 kubelet[2873]: W0813 07:56:29.383618 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.383935 kubelet[2873]: E0813 07:56:29.383666 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.383935 kubelet[2873]: E0813 07:56:29.383752 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.404276 containerd[1620]: time="2025-08-13T07:56:29.404001182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vx9m7,Uid:a4531275-bbbe-485b-9e8d-2d3fbe4dff56,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:29.414672 kubelet[2873]: E0813 07:56:29.413925 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:29.470014 kubelet[2873]: E0813 07:56:29.469760 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.470634 kubelet[2873]: W0813 07:56:29.470266 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.470762 kubelet[2873]: E0813 07:56:29.470737 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.472089 kubelet[2873]: E0813 07:56:29.471972 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.472408 kubelet[2873]: W0813 07:56:29.472272 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.472689 kubelet[2873]: E0813 07:56:29.472569 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.473733 kubelet[2873]: E0813 07:56:29.473714 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.474908 kubelet[2873]: W0813 07:56:29.474789 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.474908 kubelet[2873]: E0813 07:56:29.474822 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.475432 kubelet[2873]: E0813 07:56:29.475325 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.475432 kubelet[2873]: W0813 07:56:29.475344 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.475432 kubelet[2873]: E0813 07:56:29.475360 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.475973 kubelet[2873]: E0813 07:56:29.475835 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.475973 kubelet[2873]: W0813 07:56:29.475881 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.475973 kubelet[2873]: E0813 07:56:29.475898 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.477123 kubelet[2873]: E0813 07:56:29.477028 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.477123 kubelet[2873]: W0813 07:56:29.477047 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.477123 kubelet[2873]: E0813 07:56:29.477064 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.478455 kubelet[2873]: E0813 07:56:29.478138 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.478455 kubelet[2873]: W0813 07:56:29.478157 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.478455 kubelet[2873]: E0813 07:56:29.478174 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.479521 kubelet[2873]: E0813 07:56:29.479317 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.479521 kubelet[2873]: W0813 07:56:29.479360 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.479521 kubelet[2873]: E0813 07:56:29.479378 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.480886 kubelet[2873]: E0813 07:56:29.480747 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.480886 kubelet[2873]: W0813 07:56:29.480783 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.480886 kubelet[2873]: E0813 07:56:29.480804 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.482870 kubelet[2873]: E0813 07:56:29.482592 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.482870 kubelet[2873]: W0813 07:56:29.482623 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.482870 kubelet[2873]: E0813 07:56:29.482639 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.483988 kubelet[2873]: E0813 07:56:29.483813 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.483988 kubelet[2873]: W0813 07:56:29.483861 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.483988 kubelet[2873]: E0813 07:56:29.483880 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.484823 kubelet[2873]: E0813 07:56:29.484558 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.484823 kubelet[2873]: W0813 07:56:29.484577 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.484823 kubelet[2873]: E0813 07:56:29.484594 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.487596 kubelet[2873]: E0813 07:56:29.486947 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.487596 kubelet[2873]: W0813 07:56:29.486970 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.487596 kubelet[2873]: E0813 07:56:29.486989 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.487596 kubelet[2873]: E0813 07:56:29.487353 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.487596 kubelet[2873]: W0813 07:56:29.487375 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.487596 kubelet[2873]: E0813 07:56:29.487391 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 07:56:29.488393 kubelet[2873]: E0813 07:56:29.488038 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.488393 kubelet[2873]: W0813 07:56:29.488057 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.488393 kubelet[2873]: E0813 07:56:29.488073 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.489954 kubelet[2873]: E0813 07:56:29.489528 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.489954 kubelet[2873]: W0813 07:56:29.489543 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.489954 kubelet[2873]: E0813 07:56:29.489559 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.490452 kubelet[2873]: E0813 07:56:29.490296 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.490452 kubelet[2873]: W0813 07:56:29.490321 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.490452 kubelet[2873]: E0813 07:56:29.490338 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.491265 kubelet[2873]: E0813 07:56:29.490985 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.491265 kubelet[2873]: W0813 07:56:29.491005 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.491265 kubelet[2873]: E0813 07:56:29.491021 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 07:56:29.491992 kubelet[2873]: E0813 07:56:29.491971 2873 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 07:56:29.492728 kubelet[2873]: W0813 07:56:29.492595 2873 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 07:56:29.492728 kubelet[2873]: E0813 07:56:29.492624 2873 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Aug 13 07:56:29.508328 kubelet[2873]: I0813 07:56:29.508173 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/981413ed-74fe-461c-914c-e0dc01dda890-socket-dir\") pod \"csi-node-driver-clt64\" (UID: \"981413ed-74fe-461c-914c-e0dc01dda890\") " pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:29.509737 kubelet[2873]: I0813 07:56:29.509439 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/981413ed-74fe-461c-914c-e0dc01dda890-kubelet-dir\") pod \"csi-node-driver-clt64\" (UID: \"981413ed-74fe-461c-914c-e0dc01dda890\") " pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:29.511088 kubelet[2873]: I0813 07:56:29.510177 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/981413ed-74fe-461c-914c-e0dc01dda890-registration-dir\") pod \"csi-node-driver-clt64\" (UID: \"981413ed-74fe-461c-914c-e0dc01dda890\") " pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:29.511770 kubelet[2873]: I0813 07:56:29.511634 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/981413ed-74fe-461c-914c-e0dc01dda890-varrun\") pod \"csi-node-driver-clt64\" (UID: \"981413ed-74fe-461c-914c-e0dc01dda890\") " pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:29.512781 kubelet[2873]: I0813 07:56:29.512660 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfn29\" (UniqueName: \"kubernetes.io/projected/981413ed-74fe-461c-914c-e0dc01dda890-kube-api-access-nfn29\") pod \"csi-node-driver-clt64\" (UID: \"981413ed-74fe-461c-914c-e0dc01dda890\") " pod="calico-system/csi-node-driver-clt64"
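
For contrast, a well-formed driver at that path would answer the init probe with a JSON status on stdout, which is all the kubelet's probe needs to parse. A minimal sketch of such a driver, following the FlexVolume call convention (the capabilities map is assumed from that convention, not from this log):

    package main

    import (
        "encoding/json"
        "os"
    )

    // Minimal FlexVolume driver sketch: answer "init" with a success
    // status so the kubelet's plugin probe receives non-empty JSON.
    func main() {
        out := json.NewEncoder(os.Stdout)
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out.Encode(map[string]any{
                "status":       "Success",
                "capabilities": map[string]bool{"attach": false},
            })
            return
        }
        // mount/unmount and the rest of the call surface are omitted here.
        out.Encode(map[string]string{"status": "Not supported"})
    }
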
Aug 13 07:56:29.536620 containerd[1620]: time="2025-08-13T07:56:29.536017553Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:29.536620 containerd[1620]: time="2025-08-13T07:56:29.536100340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:29.536620 containerd[1620]: time="2025-08-13T07:56:29.536116819Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:29.536620 containerd[1620]: time="2025-08-13T07:56:29.536313931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Aug 13 07:56:29.680311 containerd[1620]: time="2025-08-13T07:56:29.679205550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-vx9m7,Uid:a4531275-bbbe-485b-9e8d-2d3fbe4dff56,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\"" Aug 13 07:56:29.702549 containerd[1620]: time="2025-08-13T07:56:29.702506797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 07:56:29.726011 containerd[1620]: time="2025-08-13T07:56:29.725951596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549d5cf4b7-c74s7,Uid:baaf43e9-06ee-4d37-a7b7-c4a485547892,Namespace:calico-system,Attempt:0,} returns sandbox id \"b7f747c9c979c1768ccc776c7dd08b763a8697d02b73087e0ac5da0f219482a8\"" Aug 13 07:56:31.090015 kubelet[2873]: E0813 07:56:31.089942 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:31.544998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2237072586.mount: Deactivated successfully.
Aug 13 07:56:31.675167 containerd[1620]: time="2025-08-13T07:56:31.675112709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:31.676728 containerd[1620]: time="2025-08-13T07:56:31.676277511Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=5939797" Aug 13 07:56:31.677397 containerd[1620]: time="2025-08-13T07:56:31.677317940Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:31.690063 containerd[1620]: time="2025-08-13T07:56:31.689969331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:31.691694 containerd[1620]: time="2025-08-13T07:56:31.691562745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.988597722s" Aug 13 07:56:31.691694 containerd[1620]: time="2025-08-13T07:56:31.691640509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 07:56:31.694995 containerd[1620]: time="2025-08-13T07:56:31.694383799Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 07:56:31.700665 containerd[1620]: time="2025-08-13T07:56:31.700496416Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 07:56:31.725890 containerd[1620]: time="2025-08-13T07:56:31.725810009Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932\"" Aug 13 07:56:31.729651 containerd[1620]: time="2025-08-13T07:56:31.726908666Z" level=info msg="StartContainer for \"bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932\"" Aug 13 07:56:31.867908 containerd[1620]: time="2025-08-13T07:56:31.867858998Z" level=info msg="StartContainer for \"bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932\" returns successfully" Aug 13 07:56:32.014007 containerd[1620]: time="2025-08-13T07:56:31.972529746Z" level=info msg="shim disconnected" id=bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932 namespace=k8s.io Aug 13 07:56:32.014007 containerd[1620]: time="2025-08-13T07:56:32.013899713Z" level=warning msg="cleaning up after shim disconnected" id=bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932 namespace=k8s.io Aug 13 07:56:32.014007 containerd[1620]: time="2025-08-13T07:56:32.013934446Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:56:32.490693 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-bacfc2f6add8c2f75ac32fea935a9ef9eb2f8f79b93ed31e3450986197457932-rootfs.mount: Deactivated successfully. Aug 13 07:56:33.089950 kubelet[2873]: E0813 07:56:33.089858 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:35.089526 kubelet[2873]: E0813 07:56:35.089356 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:36.298866 containerd[1620]: time="2025-08-13T07:56:36.298694123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:36.301074 containerd[1620]: time="2025-08-13T07:56:36.300250180Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=33740523" Aug 13 07:56:36.304185 containerd[1620]: time="2025-08-13T07:56:36.304142137Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:36.307624 containerd[1620]: time="2025-08-13T07:56:36.307294707Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:36.309608 containerd[1620]: time="2025-08-13T07:56:36.309575281Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 4.615131862s" Aug 13 07:56:36.309778 containerd[1620]: time="2025-08-13T07:56:36.309751524Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 07:56:36.312093 containerd[1620]: time="2025-08-13T07:56:36.312064512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 07:56:36.341528 containerd[1620]: time="2025-08-13T07:56:36.341475033Z" level=info msg="CreateContainer within sandbox \"b7f747c9c979c1768ccc776c7dd08b763a8697d02b73087e0ac5da0f219482a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 07:56:36.363213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947888925.mount: Deactivated successfully. 
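
The reported typha pull duration can be cross-checked against the surrounding entries: PullImage was logged at 07:56:31.694383799 and the Pulled line at 07:56:36.309575281. A small Go check (both timestamps copied from the log above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        start, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:56:31.694383799Z") // PullImage logged
        done, _ := time.Parse(time.RFC3339Nano, "2025-08-13T07:56:36.309575281Z")  // Pulled logged
        // Prints 4.615191482s; the log reports "in 4.615131862s" because the
        // measured pull begins a few tens of microseconds after the PullImage line.
        fmt.Println(done.Sub(start))
    }
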
Aug 13 07:56:36.369378 containerd[1620]: time="2025-08-13T07:56:36.369336803Z" level=info msg="CreateContainer within sandbox \"b7f747c9c979c1768ccc776c7dd08b763a8697d02b73087e0ac5da0f219482a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e4626c7dd74640cdc1666bfbd614826f7ac7967d471afcc21bfb0939f3dba3ff\"" Aug 13 07:56:36.371722 containerd[1620]: time="2025-08-13T07:56:36.371544942Z" level=info msg="StartContainer for \"e4626c7dd74640cdc1666bfbd614826f7ac7967d471afcc21bfb0939f3dba3ff\"" Aug 13 07:56:36.495332 containerd[1620]: time="2025-08-13T07:56:36.495100319Z" level=info msg="StartContainer for \"e4626c7dd74640cdc1666bfbd614826f7ac7967d471afcc21bfb0939f3dba3ff\" returns successfully" Aug 13 07:56:37.096419 kubelet[2873]: E0813 07:56:37.096281 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:37.301128 kubelet[2873]: I0813 07:56:37.300995 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-549d5cf4b7-c74s7" podStartSLOduration=2.7192256500000003 podStartE2EDuration="9.300954807s" podCreationTimestamp="2025-08-13 07:56:28 +0000 UTC" firstStartedPulling="2025-08-13 07:56:29.729859111 +0000 UTC m=+22.960531486" lastFinishedPulling="2025-08-13 07:56:36.311588249 +0000 UTC m=+29.542260643" observedRunningTime="2025-08-13 07:56:37.300898467 +0000 UTC m=+30.531570854" watchObservedRunningTime="2025-08-13 07:56:37.300954807 +0000 UTC m=+30.531627182" Aug 13 07:56:38.284286 kubelet[2873]: I0813 07:56:38.284216 2873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:56:39.090304 kubelet[2873]: E0813 07:56:39.090211 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:41.050112 containerd[1620]: time="2025-08-13T07:56:41.049918382Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:41.054086 containerd[1620]: time="2025-08-13T07:56:41.053974137Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 07:56:41.059248 containerd[1620]: time="2025-08-13T07:56:41.058553520Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:41.064502 containerd[1620]: time="2025-08-13T07:56:41.064453436Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:41.068038 containerd[1620]: time="2025-08-13T07:56:41.068001790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.754878455s" Aug 13 07:56:41.068142 containerd[1620]: time="2025-08-13T07:56:41.068062546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 07:56:41.072764 containerd[1620]: time="2025-08-13T07:56:41.072703044Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 07:56:41.090574 kubelet[2873]: E0813 07:56:41.090395 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:41.139526 containerd[1620]: time="2025-08-13T07:56:41.139474618Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3\"" Aug 13 07:56:41.142476 containerd[1620]: time="2025-08-13T07:56:41.141376332Z" level=info msg="StartContainer for \"37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3\"" Aug 13 07:56:41.298566 containerd[1620]: time="2025-08-13T07:56:41.298492043Z" level=info msg="StartContainer for \"37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3\" returns successfully" Aug 13 07:56:42.712804 kubelet[2873]: I0813 07:56:42.708441 2873 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 07:56:42.727353 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3-rootfs.mount: Deactivated successfully. 
Aug 13 07:56:42.746692 containerd[1620]: time="2025-08-13T07:56:42.727042760Z" level=info msg="shim disconnected" id=37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3 namespace=k8s.io Aug 13 07:56:42.746692 containerd[1620]: time="2025-08-13T07:56:42.746400376Z" level=warning msg="cleaning up after shim disconnected" id=37f2e0bacfd8e15c05336080fe5c068c623bb42760ea37f3859caedf98ff90f3 namespace=k8s.io Aug 13 07:56:42.746692 containerd[1620]: time="2025-08-13T07:56:42.746433481Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 07:56:42.881265 kubelet[2873]: I0813 07:56:42.880070 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f05ad85b-d4f3-4c2d-b462-454f0dd5790f-config-volume\") pod \"coredns-7c65d6cfc9-twz7x\" (UID: \"f05ad85b-d4f3-4c2d-b462-454f0dd5790f\") " pod="kube-system/coredns-7c65d6cfc9-twz7x" Aug 13 07:56:42.881265 kubelet[2873]: I0813 07:56:42.880148 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n76hj\" (UniqueName: \"kubernetes.io/projected/f05ad85b-d4f3-4c2d-b462-454f0dd5790f-kube-api-access-n76hj\") pod \"coredns-7c65d6cfc9-twz7x\" (UID: \"f05ad85b-d4f3-4c2d-b462-454f0dd5790f\") " pod="kube-system/coredns-7c65d6cfc9-twz7x" Aug 13 07:56:42.980726 kubelet[2873]: I0813 07:56:42.980498 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f547ffd7-7d10-4436-85b6-ec353b820f63-tigera-ca-bundle\") pod \"calico-kube-controllers-85fc769f-kftlc\" (UID: \"f547ffd7-7d10-4436-85b6-ec353b820f63\") " pod="calico-system/calico-kube-controllers-85fc769f-kftlc" Aug 13 07:56:42.981014 kubelet[2873]: I0813 07:56:42.980975 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/11f4b399-8c5b-42d8-8ee1-f13c6bb84b22-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-t8sp4\" (UID: \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\") " pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:42.981298 kubelet[2873]: I0813 07:56:42.981273 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3cecc162-5b6e-4863-8dff-bed08c37d53a-calico-apiserver-certs\") pod \"calico-apiserver-75f8484686-4hn7r\" (UID: \"3cecc162-5b6e-4863-8dff-bed08c37d53a\") " pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" Aug 13 07:56:42.981487 kubelet[2873]: I0813 07:56:42.981462 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9clhp\" (UniqueName: \"kubernetes.io/projected/3cecc162-5b6e-4863-8dff-bed08c37d53a-kube-api-access-9clhp\") pod \"calico-apiserver-75f8484686-4hn7r\" (UID: \"3cecc162-5b6e-4863-8dff-bed08c37d53a\") " pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" Aug 13 07:56:42.981664 kubelet[2873]: I0813 07:56:42.981620 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/11f4b399-8c5b-42d8-8ee1-f13c6bb84b22-config\") pod \"goldmane-58fd7646b9-t8sp4\" (UID: \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\") " pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:42.982161 kubelet[2873]: I0813 07:56:42.981869 2873 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/11f4b399-8c5b-42d8-8ee1-f13c6bb84b22-goldmane-key-pair\") pod \"goldmane-58fd7646b9-t8sp4\" (UID: \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\") " pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:42.982161 kubelet[2873]: I0813 07:56:42.981928 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b6d807f6-bf8b-4276-bd29-e9b753213504-calico-apiserver-certs\") pod \"calico-apiserver-75f8484686-ncws4\" (UID: \"b6d807f6-bf8b-4276-bd29-e9b753213504\") " pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" Aug 13 07:56:42.982161 kubelet[2873]: I0813 07:56:42.982000 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgdjd\" (UniqueName: \"kubernetes.io/projected/cb2516e5-58ff-4e99-8bda-62cb038aee7c-kube-api-access-fgdjd\") pod \"coredns-7c65d6cfc9-wd7mn\" (UID: \"cb2516e5-58ff-4e99-8bda-62cb038aee7c\") " pod="kube-system/coredns-7c65d6cfc9-wd7mn" Aug 13 07:56:42.982680 kubelet[2873]: I0813 07:56:42.982442 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb2516e5-58ff-4e99-8bda-62cb038aee7c-config-volume\") pod \"coredns-7c65d6cfc9-wd7mn\" (UID: \"cb2516e5-58ff-4e99-8bda-62cb038aee7c\") " pod="kube-system/coredns-7c65d6cfc9-wd7mn" Aug 13 07:56:42.982680 kubelet[2873]: I0813 07:56:42.982539 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-backend-key-pair\") pod \"whisker-bb7577cf9-r5blg\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " pod="calico-system/whisker-bb7577cf9-r5blg" Aug 13 07:56:42.982891 kubelet[2873]: I0813 07:56:42.982862 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n84jz\" (UniqueName: \"kubernetes.io/projected/f547ffd7-7d10-4436-85b6-ec353b820f63-kube-api-access-n84jz\") pod \"calico-kube-controllers-85fc769f-kftlc\" (UID: \"f547ffd7-7d10-4436-85b6-ec353b820f63\") " pod="calico-system/calico-kube-controllers-85fc769f-kftlc" Aug 13 07:56:42.983039 kubelet[2873]: I0813 07:56:42.983008 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-ca-bundle\") pod \"whisker-bb7577cf9-r5blg\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " pod="calico-system/whisker-bb7577cf9-r5blg" Aug 13 07:56:42.983221 kubelet[2873]: I0813 07:56:42.983183 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4fl\" (UniqueName: \"kubernetes.io/projected/11f4b399-8c5b-42d8-8ee1-f13c6bb84b22-kube-api-access-dw4fl\") pod \"goldmane-58fd7646b9-t8sp4\" (UID: \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\") " pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:42.983434 kubelet[2873]: I0813 07:56:42.983299 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmzwz\" (UniqueName: \"kubernetes.io/projected/781d81dc-8a62-400a-b15b-2c67ff1291a5-kube-api-access-qmzwz\") 
pod \"whisker-bb7577cf9-r5blg\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " pod="calico-system/whisker-bb7577cf9-r5blg" Aug 13 07:56:42.983705 kubelet[2873]: I0813 07:56:42.983566 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8jwj\" (UniqueName: \"kubernetes.io/projected/b6d807f6-bf8b-4276-bd29-e9b753213504-kube-api-access-h8jwj\") pod \"calico-apiserver-75f8484686-ncws4\" (UID: \"b6d807f6-bf8b-4276-bd29-e9b753213504\") " pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" Aug 13 07:56:43.145488 containerd[1620]: time="2025-08-13T07:56:43.144964819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clt64,Uid:981413ed-74fe-461c-914c-e0dc01dda890,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:43.185842 containerd[1620]: time="2025-08-13T07:56:43.185556365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twz7x,Uid:f05ad85b-d4f3-4c2d-b462-454f0dd5790f,Namespace:kube-system,Attempt:0,}" Aug 13 07:56:43.230083 containerd[1620]: time="2025-08-13T07:56:43.230005353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-ncws4,Uid:b6d807f6-bf8b-4276-bd29-e9b753213504,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:56:43.230907 containerd[1620]: time="2025-08-13T07:56:43.230367775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wd7mn,Uid:cb2516e5-58ff-4e99-8bda-62cb038aee7c,Namespace:kube-system,Attempt:0,}" Aug 13 07:56:43.230907 containerd[1620]: time="2025-08-13T07:56:43.230692228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-t8sp4,Uid:11f4b399-8c5b-42d8-8ee1-f13c6bb84b22,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:43.232253 containerd[1620]: time="2025-08-13T07:56:43.231929700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-4hn7r,Uid:3cecc162-5b6e-4863-8dff-bed08c37d53a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 07:56:43.233639 containerd[1620]: time="2025-08-13T07:56:43.233197843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fc769f-kftlc,Uid:f547ffd7-7d10-4436-85b6-ec353b820f63,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:43.233995 containerd[1620]: time="2025-08-13T07:56:43.233951376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb7577cf9-r5blg,Uid:781d81dc-8a62-400a-b15b-2c67ff1291a5,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:43.380349 containerd[1620]: time="2025-08-13T07:56:43.380115497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 07:56:43.704517 containerd[1620]: time="2025-08-13T07:56:43.704431214Z" level=error msg="Failed to destroy network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.711988 containerd[1620]: time="2025-08-13T07:56:43.711941545Z" level=error msg="Failed to destroy network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.721369 containerd[1620]: time="2025-08-13T07:56:43.721314201Z" level=error 
msg="encountered an error cleaning up failed sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.724156 containerd[1620]: time="2025-08-13T07:56:43.723076445Z" level=error msg="encountered an error cleaning up failed sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.777657 containerd[1620]: time="2025-08-13T07:56:43.777451682Z" level=error msg="Failed to destroy network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.785308 containerd[1620]: time="2025-08-13T07:56:43.783922980Z" level=error msg="Failed to destroy network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.785308 containerd[1620]: time="2025-08-13T07:56:43.785011073Z" level=error msg="encountered an error cleaning up failed sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.788268 containerd[1620]: time="2025-08-13T07:56:43.786414450Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bb7577cf9-r5blg,Uid:781d81dc-8a62-400a-b15b-2c67ff1291a5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.788268 containerd[1620]: time="2025-08-13T07:56:43.786589870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-4hn7r,Uid:3cecc162-5b6e-4863-8dff-bed08c37d53a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.788268 containerd[1620]: time="2025-08-13T07:56:43.786818944Z" level=error msg="Failed to destroy network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.787527 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb-shm.mount: Deactivated successfully. Aug 13 07:56:43.796737 containerd[1620]: time="2025-08-13T07:56:43.792785559Z" level=error msg="encountered an error cleaning up failed sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.796737 containerd[1620]: time="2025-08-13T07:56:43.794647108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fc769f-kftlc,Uid:f547ffd7-7d10-4436-85b6-ec353b820f63,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.796737 containerd[1620]: time="2025-08-13T07:56:43.794770699Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twz7x,Uid:f05ad85b-d4f3-4c2d-b462-454f0dd5790f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.796737 containerd[1620]: time="2025-08-13T07:56:43.794962279Z" level=error msg="Failed to destroy network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.796737 containerd[1620]: time="2025-08-13T07:56:43.795633460Z" level=error msg="encountered an error cleaning up failed sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.797690 kubelet[2873]: E0813 07:56:43.795122 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.797690 kubelet[2873]: E0813 07:56:43.795187 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.797690 kubelet[2873]: E0813 07:56:43.795307 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bb7577cf9-r5blg" Aug 13 07:56:43.797690 kubelet[2873]: E0813 07:56:43.795376 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-bb7577cf9-r5blg" Aug 13 07:56:43.793950 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f-shm.mount: Deactivated successfully. Aug 13 07:56:43.803318 kubelet[2873]: E0813 07:56:43.795464 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-bb7577cf9-r5blg_calico-system(781d81dc-8a62-400a-b15b-2c67ff1291a5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-bb7577cf9-r5blg_calico-system(781d81dc-8a62-400a-b15b-2c67ff1291a5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bb7577cf9-r5blg" podUID="781d81dc-8a62-400a-b15b-2c67ff1291a5" Aug 13 07:56:43.803318 kubelet[2873]: E0813 07:56:43.795697 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" Aug 13 07:56:43.803318 kubelet[2873]: E0813 07:56:43.795803 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" Aug 13 07:56:43.794313 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a-shm.mount: Deactivated successfully. 
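The paired messages repeating above ("Failed to destroy network for sandbox ..." followed by "encountered an error cleaning up failed sandbox ..., marking sandbox state as SANDBOX_UNKNOWN") are containerd's cleanup path for a sandbox whose network setup never succeeded. A minimal sketch of the assumed control flow (illustrative only, not containerd's actual code): a failed CNI ADD triggers a best-effort CNI DEL, and when the DEL fails too the sandbox state can no longer be trusted, so it is recorded as SANDBOX_UNKNOWN and systemd is left to tear down the sandbox's shm mount.

package main

import (
	"errors"
	"fmt"
)

// runPodSandbox sketches the sequence visible in the log: ADD fails, a
// best-effort DEL also fails, the state falls back to SANDBOX_UNKNOWN.
func runPodSandbox(netAdd, netDel func() error, markUnknown func()) error {
	if addErr := netAdd(); addErr != nil {
		if delErr := netDel(); delErr != nil {
			markUnknown() // "marking sandbox state as SANDBOX_UNKNOWN"
		}
		return fmt.Errorf("failed to setup network for sandbox: %w", addErr)
	}
	return nil
}

func main() {
	cniErr := errors.New(`plugin type="calico" failed: stat /var/lib/calico/nodename: no such file or directory`)
	err := runPodSandbox(
		func() error { return cniErr },                    // ADD fails
		func() error { return cniErr },                    // DEL fails the same way
		func() { fmt.Println("marking SANDBOX_UNKNOWN") }, // state fallback
	)
	fmt.Println(err)
}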
Aug 13 07:56:43.803626 kubelet[2873]: E0813 07:56:43.795859 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f8484686-4hn7r_calico-apiserver(3cecc162-5b6e-4863-8dff-bed08c37d53a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f8484686-4hn7r_calico-apiserver(3cecc162-5b6e-4863-8dff-bed08c37d53a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" podUID="3cecc162-5b6e-4863-8dff-bed08c37d53a" Aug 13 07:56:43.803626 kubelet[2873]: E0813 07:56:43.795927 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.803626 kubelet[2873]: E0813 07:56:43.795967 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fc769f-kftlc" Aug 13 07:56:43.807843 kubelet[2873]: E0813 07:56:43.795998 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fc769f-kftlc" Aug 13 07:56:43.807843 kubelet[2873]: E0813 07:56:43.796080 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fc769f-kftlc_calico-system(f547ffd7-7d10-4436-85b6-ec353b820f63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fc769f-kftlc_calico-system(f547ffd7-7d10-4436-85b6-ec353b820f63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fc769f-kftlc" podUID="f547ffd7-7d10-4436-85b6-ec353b820f63" Aug 13 07:56:43.807843 kubelet[2873]: E0813 07:56:43.795122 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 07:56:43.806600 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663-shm.mount: Deactivated successfully. Aug 13 07:56:43.810435 kubelet[2873]: E0813 07:56:43.796179 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-twz7x" Aug 13 07:56:43.810435 kubelet[2873]: E0813 07:56:43.796202 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-twz7x" Aug 13 07:56:43.810435 kubelet[2873]: E0813 07:56:43.798509 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-twz7x_kube-system(f05ad85b-d4f3-4c2d-b462-454f0dd5790f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-twz7x_kube-system(f05ad85b-d4f3-4c2d-b462-454f0dd5790f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-twz7x" podUID="f05ad85b-d4f3-4c2d-b462-454f0dd5790f" Aug 13 07:56:43.814903 containerd[1620]: time="2025-08-13T07:56:43.811004483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-ncws4,Uid:b6d807f6-bf8b-4276-bd29-e9b753213504,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.815059 kubelet[2873]: E0813 07:56:43.814613 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.815059 kubelet[2873]: E0813 07:56:43.814668 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" Aug 13 07:56:43.815059 kubelet[2873]: E0813 
07:56:43.814698 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" Aug 13 07:56:43.815318 kubelet[2873]: E0813 07:56:43.814800 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-75f8484686-ncws4_calico-apiserver(b6d807f6-bf8b-4276-bd29-e9b753213504)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-75f8484686-ncws4_calico-apiserver(b6d807f6-bf8b-4276-bd29-e9b753213504)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" podUID="b6d807f6-bf8b-4276-bd29-e9b753213504" Aug 13 07:56:43.833020 containerd[1620]: time="2025-08-13T07:56:43.832951911Z" level=error msg="encountered an error cleaning up failed sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.833302 containerd[1620]: time="2025-08-13T07:56:43.833266844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clt64,Uid:981413ed-74fe-461c-914c-e0dc01dda890,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.835657 kubelet[2873]: E0813 07:56:43.833701 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.835657 kubelet[2873]: E0813 07:56:43.833791 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:43.835657 kubelet[2873]: E0813 07:56:43.833822 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-clt64" Aug 13 07:56:43.835876 kubelet[2873]: E0813 07:56:43.833872 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-clt64_calico-system(981413ed-74fe-461c-914c-e0dc01dda890)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-clt64_calico-system(981413ed-74fe-461c-914c-e0dc01dda890)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:43.848253 containerd[1620]: time="2025-08-13T07:56:43.848131518Z" level=error msg="Failed to destroy network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.850424 containerd[1620]: time="2025-08-13T07:56:43.850248450Z" level=error msg="Failed to destroy network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.850862 containerd[1620]: time="2025-08-13T07:56:43.850771604Z" level=error msg="encountered an error cleaning up failed sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.850969 containerd[1620]: time="2025-08-13T07:56:43.850902084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wd7mn,Uid:cb2516e5-58ff-4e99-8bda-62cb038aee7c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.851421 kubelet[2873]: E0813 07:56:43.851328 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.851634 kubelet[2873]: E0813 07:56:43.851581 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wd7mn" Aug 13 07:56:43.851866 kubelet[2873]: E0813 07:56:43.851812 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-wd7mn" Aug 13 07:56:43.852124 kubelet[2873]: E0813 07:56:43.852058 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-wd7mn_kube-system(cb2516e5-58ff-4e99-8bda-62cb038aee7c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-wd7mn_kube-system(cb2516e5-58ff-4e99-8bda-62cb038aee7c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wd7mn" podUID="cb2516e5-58ff-4e99-8bda-62cb038aee7c" Aug 13 07:56:43.852553 containerd[1620]: time="2025-08-13T07:56:43.852500395Z" level=error msg="encountered an error cleaning up failed sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.852739 containerd[1620]: time="2025-08-13T07:56:43.852661192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-t8sp4,Uid:11f4b399-8c5b-42d8-8ee1-f13c6bb84b22,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.853842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2-shm.mount: Deactivated successfully. 
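Every failure in this burst bottoms out in the same stat error, which points at one root cause rather than eight independent ones: the Calico CNI plugin needs the node name, and it reads it from a file that only the calico/node container creates. A minimal sketch of that lookup (an assumed re-implementation for illustration, not Calico's actual source):

package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

// determineNodename mirrors the check implied by the errors above: calico/node
// writes this file at startup into the host-mounted /var/lib/calico/; until it
// is running, every CNI ADD and DEL on the node fails the same way.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename:", name)
}

In this log the condition is transient: calico-node-vx9m7 is still pulling its image while these errors fire (the pull completes at 07:56:53 below), so the errors resolve on their own once the container starts.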
Aug 13 07:56:43.855707 kubelet[2873]: E0813 07:56:43.855115 2873 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:43.855707 kubelet[2873]: E0813 07:56:43.855181 2873 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:43.855707 kubelet[2873]: E0813 07:56:43.855207 2873 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-t8sp4" Aug 13 07:56:43.857182 kubelet[2873]: E0813 07:56:43.855288 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-t8sp4_calico-system(11f4b399-8c5b-42d8-8ee1-f13c6bb84b22)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-t8sp4_calico-system(11f4b399-8c5b-42d8-8ee1-f13c6bb84b22)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-t8sp4" podUID="11f4b399-8c5b-42d8-8ee1-f13c6bb84b22" Aug 13 07:56:44.329490 kubelet[2873]: I0813 07:56:44.329435 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:44.331189 kubelet[2873]: I0813 07:56:44.331003 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:56:44.371277 kubelet[2873]: I0813 07:56:44.369907 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:44.376881 kubelet[2873]: I0813 07:56:44.376173 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:56:44.380571 kubelet[2873]: I0813 07:56:44.380153 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:56:44.388566 containerd[1620]: time="2025-08-13T07:56:44.388518623Z" level=info msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" Aug 13 07:56:44.390499 containerd[1620]: 
time="2025-08-13T07:56:44.389427484Z" level=info msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" Aug 13 07:56:44.390586 containerd[1620]: time="2025-08-13T07:56:44.390509950Z" level=info msg="Ensure that sandbox 4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92 in task-service has been cleanup successfully" Aug 13 07:56:44.390704 containerd[1620]: time="2025-08-13T07:56:44.390675376Z" level=info msg="Ensure that sandbox 68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb in task-service has been cleanup successfully" Aug 13 07:56:44.395724 containerd[1620]: time="2025-08-13T07:56:44.395584794Z" level=info msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" Aug 13 07:56:44.395849 containerd[1620]: time="2025-08-13T07:56:44.395820807Z" level=info msg="Ensure that sandbox e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a in task-service has been cleanup successfully" Aug 13 07:56:44.398089 kubelet[2873]: I0813 07:56:44.397527 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:44.399434 containerd[1620]: time="2025-08-13T07:56:44.399079123Z" level=info msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" Aug 13 07:56:44.400358 containerd[1620]: time="2025-08-13T07:56:44.400082212Z" level=info msg="Ensure that sandbox aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663 in task-service has been cleanup successfully" Aug 13 07:56:44.402669 containerd[1620]: time="2025-08-13T07:56:44.402630896Z" level=info msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" Aug 13 07:56:44.402989 containerd[1620]: time="2025-08-13T07:56:44.402959676Z" level=info msg="Ensure that sandbox c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f in task-service has been cleanup successfully" Aug 13 07:56:44.406221 containerd[1620]: time="2025-08-13T07:56:44.405703672Z" level=info msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" Aug 13 07:56:44.406221 containerd[1620]: time="2025-08-13T07:56:44.405952634Z" level=info msg="Ensure that sandbox 998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65 in task-service has been cleanup successfully" Aug 13 07:56:44.408703 kubelet[2873]: I0813 07:56:44.408619 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:56:44.410794 containerd[1620]: time="2025-08-13T07:56:44.410738587Z" level=info msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" Aug 13 07:56:44.414052 kubelet[2873]: I0813 07:56:44.414004 2873 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:56:44.416804 containerd[1620]: time="2025-08-13T07:56:44.416609921Z" level=info msg="Ensure that sandbox 8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46 in task-service has been cleanup successfully" Aug 13 07:56:44.424432 containerd[1620]: time="2025-08-13T07:56:44.423804355Z" level=info msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" Aug 13 07:56:44.424432 containerd[1620]: 
time="2025-08-13T07:56:44.424123028Z" level=info msg="Ensure that sandbox f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2 in task-service has been cleanup successfully" Aug 13 07:56:44.530868 containerd[1620]: time="2025-08-13T07:56:44.530797278Z" level=error msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" failed" error="failed to destroy network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.538696 kubelet[2873]: E0813 07:56:44.531105 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:56:44.552461 kubelet[2873]: E0813 07:56:44.538745 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92"} Aug 13 07:56:44.552638 kubelet[2873]: E0813 07:56:44.552573 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3cecc162-5b6e-4863-8dff-bed08c37d53a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.552752 kubelet[2873]: E0813 07:56:44.552643 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3cecc162-5b6e-4863-8dff-bed08c37d53a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" podUID="3cecc162-5b6e-4863-8dff-bed08c37d53a" Aug 13 07:56:44.568956 containerd[1620]: time="2025-08-13T07:56:44.568684140Z" level=error msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" failed" error="failed to destroy network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.569118 kubelet[2873]: E0813 07:56:44.569023 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" podSandboxID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:44.569118 kubelet[2873]: E0813 07:56:44.569085 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65"} Aug 13 07:56:44.569804 kubelet[2873]: E0813 07:56:44.569173 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb2516e5-58ff-4e99-8bda-62cb038aee7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.569804 kubelet[2873]: E0813 07:56:44.569220 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb2516e5-58ff-4e99-8bda-62cb038aee7c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-wd7mn" podUID="cb2516e5-58ff-4e99-8bda-62cb038aee7c" Aug 13 07:56:44.569804 kubelet[2873]: E0813 07:56:44.569726 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:56:44.570079 containerd[1620]: time="2025-08-13T07:56:44.569535141Z" level=error msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" failed" error="failed to destroy network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.570127 kubelet[2873]: E0813 07:56:44.569805 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a"} Aug 13 07:56:44.570127 kubelet[2873]: E0813 07:56:44.569889 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b6d807f6-bf8b-4276-bd29-e9b753213504\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.570127 kubelet[2873]: E0813 07:56:44.569958 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b6d807f6-bf8b-4276-bd29-e9b753213504\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" podUID="b6d807f6-bf8b-4276-bd29-e9b753213504" Aug 13 07:56:44.597966 containerd[1620]: time="2025-08-13T07:56:44.597815085Z" level=error msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" failed" error="failed to destroy network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.600705 kubelet[2873]: E0813 07:56:44.600645 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:56:44.600831 kubelet[2873]: E0813 07:56:44.600732 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46"} Aug 13 07:56:44.600831 kubelet[2873]: E0813 07:56:44.600801 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f05ad85b-d4f3-4c2d-b462-454f0dd5790f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.601580 kubelet[2873]: E0813 07:56:44.600840 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f05ad85b-d4f3-4c2d-b462-454f0dd5790f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-twz7x" podUID="f05ad85b-d4f3-4c2d-b462-454f0dd5790f" Aug 13 07:56:44.610981 containerd[1620]: time="2025-08-13T07:56:44.610914664Z" level=error msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" failed" error="failed to destroy network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.611394 kubelet[2873]: E0813 07:56:44.611205 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = 
failed to destroy network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:44.611630 kubelet[2873]: E0813 07:56:44.611456 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663"} Aug 13 07:56:44.611748 kubelet[2873]: E0813 07:56:44.611675 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"981413ed-74fe-461c-914c-e0dc01dda890\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.611972 kubelet[2873]: E0813 07:56:44.611890 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"981413ed-74fe-461c-914c-e0dc01dda890\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-clt64" podUID="981413ed-74fe-461c-914c-e0dc01dda890" Aug 13 07:56:44.612071 containerd[1620]: time="2025-08-13T07:56:44.612010648Z" level=error msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" failed" error="failed to destroy network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.612821 kubelet[2873]: E0813 07:56:44.612229 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:56:44.612821 kubelet[2873]: E0813 07:56:44.612515 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2"} Aug 13 07:56:44.612821 kubelet[2873]: E0813 07:56:44.612652 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.612821 kubelet[2873]: E0813 07:56:44.612704 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-t8sp4" podUID="11f4b399-8c5b-42d8-8ee1-f13c6bb84b22" Aug 13 07:56:44.614302 containerd[1620]: time="2025-08-13T07:56:44.614220121Z" level=error msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" failed" error="failed to destroy network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.614942 kubelet[2873]: E0813 07:56:44.614704 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:44.614942 kubelet[2873]: E0813 07:56:44.614764 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb"} Aug 13 07:56:44.614942 kubelet[2873]: E0813 07:56:44.614836 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"781d81dc-8a62-400a-b15b-2c67ff1291a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.614942 kubelet[2873]: E0813 07:56:44.614877 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"781d81dc-8a62-400a-b15b-2c67ff1291a5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-bb7577cf9-r5blg" podUID="781d81dc-8a62-400a-b15b-2c67ff1291a5" Aug 13 07:56:44.619916 containerd[1620]: time="2025-08-13T07:56:44.619802161Z" level=error msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" failed" error="failed to destroy network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 07:56:44.620063 kubelet[2873]: E0813 07:56:44.620012 2873 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:56:44.620178 kubelet[2873]: E0813 07:56:44.620072 2873 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f"} Aug 13 07:56:44.620178 kubelet[2873]: E0813 07:56:44.620144 2873 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f547ffd7-7d10-4436-85b6-ec353b820f63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 13 07:56:44.620385 kubelet[2873]: E0813 07:56:44.620182 2873 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f547ffd7-7d10-4436-85b6-ec353b820f63\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fc769f-kftlc" podUID="f547ffd7-7d10-4436-85b6-ec353b820f63" Aug 13 07:56:44.719413 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65-shm.mount: Deactivated successfully. Aug 13 07:56:49.639299 kubelet[2873]: I0813 07:56:49.638050 2873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:56:53.692410 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647634001.mount: Deactivated successfully. 
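Note how the same plugin string surfaces at three layers in each entry above: containerd logs it raw, the CRI boundary stamps it with "rpc error: code = Unknown desc = ..." (gRPC's mapping for errors that carry no status code), and the kubelet's pod workers then requeue the pod ("Error syncing pod, skipping") until a later sync succeeds. A short sketch of that wrapping step, assuming stock grpc-go behavior:

package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// What the CNI plugin hands back through containerd: a plain error.
	pluginErr := errors.New(`plugin type="calico" failed (delete): stat /var/lib/calico/nodename: no such file or directory`)

	// Crossing the CRI gRPC boundary: a non-status error becomes codes.Unknown.
	st, _ := status.FromError(pluginErr)

	// Prints the prefix seen on every kubelet line above:
	// rpc error: code = Unknown desc = plugin type="calico" failed (delete): ...
	fmt.Println(st.Err())
}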
Aug 13 07:56:53.797889 containerd[1620]: time="2025-08-13T07:56:53.797751469Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:53.814647 containerd[1620]: time="2025-08-13T07:56:53.814474265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 07:56:53.841076 containerd[1620]: time="2025-08-13T07:56:53.840984475Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:53.844441 containerd[1620]: time="2025-08-13T07:56:53.844392389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:56:53.849271 containerd[1620]: time="2025-08-13T07:56:53.849165619Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.465670796s" Aug 13 07:56:53.849374 containerd[1620]: time="2025-08-13T07:56:53.849278868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 07:56:53.911948 containerd[1620]: time="2025-08-13T07:56:53.911883750Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 07:56:53.940835 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:56:53.935942 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:56:53.936217 systemd-resolved[1515]: Flushed all caches. Aug 13 07:56:53.976769 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2620195214.mount: Deactivated successfully. 
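The pull record above pins the image twice, by tag (ghcr.io/flatcar/calico/node:v3.30.2) and by repo digest (@sha256:e94d4934...), which is what makes the fetch verifiable: rehashing the fetched content must reproduce the digest. A hypothetical sketch of that check (the blob below is a stand-in, not real manifest bytes):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// verifyDigest re-derives the content address of fetched bytes and compares
// it to the pinned repo digest from the pull log.
func verifyDigest(blob []byte, want string) error {
	sum := sha256.Sum256(blob)
	got := "sha256:" + hex.EncodeToString(sum[:])
	if got != want {
		return fmt.Errorf("digest mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	blob := []byte("placeholder manifest bytes")
	err := verifyDigest(blob, "sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760")
	fmt.Println(err) // mismatch here, since the blob is a placeholder
}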
Aug 13 07:56:53.986576 containerd[1620]: time="2025-08-13T07:56:53.986494628Z" level=info msg="CreateContainer within sandbox \"7e29adff778da59991c530a375f7d7c4ce53d7744d276e05c3c3f319d60bb433\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ab0e4eacbffd93644512486ae0ed0742139b4f68662558bc7fbdc2a597ad98cf\"" Aug 13 07:56:53.987540 containerd[1620]: time="2025-08-13T07:56:53.987481789Z" level=info msg="StartContainer for \"ab0e4eacbffd93644512486ae0ed0742139b4f68662558bc7fbdc2a597ad98cf\"" Aug 13 07:56:54.279911 containerd[1620]: time="2025-08-13T07:56:54.279600863Z" level=info msg="StartContainer for \"ab0e4eacbffd93644512486ae0ed0742139b4f68662558bc7fbdc2a597ad98cf\" returns successfully" Aug 13 07:56:54.539493 kubelet[2873]: I0813 07:56:54.526779 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-vx9m7" podStartSLOduration=1.365573733 podStartE2EDuration="25.519440352s" podCreationTimestamp="2025-08-13 07:56:29 +0000 UTC" firstStartedPulling="2025-08-13 07:56:29.696940621 +0000 UTC m=+22.927613002" lastFinishedPulling="2025-08-13 07:56:53.850807239 +0000 UTC m=+47.081479621" observedRunningTime="2025-08-13 07:56:54.517647631 +0000 UTC m=+47.748320025" watchObservedRunningTime="2025-08-13 07:56:54.519440352 +0000 UTC m=+47.750112733" Aug 13 07:56:54.609260 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 07:56:54.611425 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 07:56:54.887102 containerd[1620]: time="2025-08-13T07:56:54.887050983Z" level=info msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.243 [INFO][4049] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.265 [INFO][4049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" iface="eth0" netns="/var/run/netns/cni-9e694055-3e3f-8db9-ee77-f1101deac069" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.266 [INFO][4049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" iface="eth0" netns="/var/run/netns/cni-9e694055-3e3f-8db9-ee77-f1101deac069" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.267 [INFO][4049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" iface="eth0" netns="/var/run/netns/cni-9e694055-3e3f-8db9-ee77-f1101deac069" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.267 [INFO][4049] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.268 [INFO][4049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.535 [INFO][4057] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.538 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.538 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.553 [WARNING][4057] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.553 [INFO][4057] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.555 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:56:55.562088 containerd[1620]: 2025-08-13 07:56:55.559 [INFO][4049] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:56:55.569992 systemd[1]: run-netns-cni\x2d9e694055\x2d3e3f\x2d8db9\x2dee77\x2df1101deac069.mount: Deactivated successfully. 
Aug 13 07:56:55.589918 containerd[1620]: time="2025-08-13T07:56:55.589836158Z" level=info msg="TearDown network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" successfully" Aug 13 07:56:55.590212 containerd[1620]: time="2025-08-13T07:56:55.590185612Z" level=info msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" returns successfully" Aug 13 07:56:55.715390 kubelet[2873]: I0813 07:56:55.715306 2873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qmzwz\" (UniqueName: \"kubernetes.io/projected/781d81dc-8a62-400a-b15b-2c67ff1291a5-kube-api-access-qmzwz\") pod \"781d81dc-8a62-400a-b15b-2c67ff1291a5\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " Aug 13 07:56:55.716276 kubelet[2873]: I0813 07:56:55.716084 2873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-backend-key-pair\") pod \"781d81dc-8a62-400a-b15b-2c67ff1291a5\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " Aug 13 07:56:55.723127 kubelet[2873]: I0813 07:56:55.722628 2873 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-ca-bundle\") pod \"781d81dc-8a62-400a-b15b-2c67ff1291a5\" (UID: \"781d81dc-8a62-400a-b15b-2c67ff1291a5\") " Aug 13 07:56:55.753363 systemd[1]: var-lib-kubelet-pods-781d81dc\x2d8a62\x2d400a\x2db15b\x2d2c67ff1291a5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqmzwz.mount: Deactivated successfully. Aug 13 07:56:55.754021 systemd[1]: var-lib-kubelet-pods-781d81dc\x2d8a62\x2d400a\x2db15b\x2d2c67ff1291a5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 07:56:55.767261 kubelet[2873]: I0813 07:56:55.763845 2873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "781d81dc-8a62-400a-b15b-2c67ff1291a5" (UID: "781d81dc-8a62-400a-b15b-2c67ff1291a5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 07:56:55.767261 kubelet[2873]: I0813 07:56:55.766316 2873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781d81dc-8a62-400a-b15b-2c67ff1291a5-kube-api-access-qmzwz" (OuterVolumeSpecName: "kube-api-access-qmzwz") pod "781d81dc-8a62-400a-b15b-2c67ff1291a5" (UID: "781d81dc-8a62-400a-b15b-2c67ff1291a5"). InnerVolumeSpecName "kube-api-access-qmzwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 07:56:55.773408 kubelet[2873]: I0813 07:56:55.773226 2873 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "781d81dc-8a62-400a-b15b-2c67ff1291a5" (UID: "781d81dc-8a62-400a-b15b-2c67ff1291a5"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 07:56:55.823815 kubelet[2873]: I0813 07:56:55.823756 2873 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-backend-key-pair\") on node \"srv-er0cq.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:56:55.823815 kubelet[2873]: I0813 07:56:55.823814 2873 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/781d81dc-8a62-400a-b15b-2c67ff1291a5-whisker-ca-bundle\") on node \"srv-er0cq.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:56:55.824049 kubelet[2873]: I0813 07:56:55.823833 2873 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qmzwz\" (UniqueName: \"kubernetes.io/projected/781d81dc-8a62-400a-b15b-2c67ff1291a5-kube-api-access-qmzwz\") on node \"srv-er0cq.gb1.brightbox.com\" DevicePath \"\"" Aug 13 07:56:56.737278 kubelet[2873]: I0813 07:56:56.736573 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blrvr\" (UniqueName: \"kubernetes.io/projected/5c034248-a1bc-489a-b627-ac6089ed2313-kube-api-access-blrvr\") pod \"whisker-85b7cdd6b9-dbvzq\" (UID: \"5c034248-a1bc-489a-b627-ac6089ed2313\") " pod="calico-system/whisker-85b7cdd6b9-dbvzq" Aug 13 07:56:56.737278 kubelet[2873]: I0813 07:56:56.736639 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c034248-a1bc-489a-b627-ac6089ed2313-whisker-backend-key-pair\") pod \"whisker-85b7cdd6b9-dbvzq\" (UID: \"5c034248-a1bc-489a-b627-ac6089ed2313\") " pod="calico-system/whisker-85b7cdd6b9-dbvzq" Aug 13 07:56:56.737278 kubelet[2873]: I0813 07:56:56.736704 2873 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c034248-a1bc-489a-b627-ac6089ed2313-whisker-ca-bundle\") pod \"whisker-85b7cdd6b9-dbvzq\" (UID: \"5c034248-a1bc-489a-b627-ac6089ed2313\") " pod="calico-system/whisker-85b7cdd6b9-dbvzq" Aug 13 07:56:57.000980 containerd[1620]: time="2025-08-13T07:56:57.000584249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b7cdd6b9-dbvzq,Uid:5c034248-a1bc-489a-b627-ac6089ed2313,Namespace:calico-system,Attempt:0,}" Aug 13 07:56:57.123953 containerd[1620]: time="2025-08-13T07:56:57.123668556Z" level=info msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" Aug 13 07:56:57.132033 containerd[1620]: time="2025-08-13T07:56:57.131973794Z" level=info msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" Aug 13 07:56:57.155320 kubelet[2873]: I0813 07:56:57.151963 2873 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781d81dc-8a62-400a-b15b-2c67ff1291a5" path="/var/lib/kubelet/pods/781d81dc-8a62-400a-b15b-2c67ff1291a5/volumes" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.481 [INFO][4233] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.484 [INFO][4233] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" iface="eth0" netns="/var/run/netns/cni-32a691cf-3df7-44b8-21e6-8b8a1887e64e" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.484 [INFO][4233] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" iface="eth0" netns="/var/run/netns/cni-32a691cf-3df7-44b8-21e6-8b8a1887e64e" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.519 [INFO][4233] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" iface="eth0" netns="/var/run/netns/cni-32a691cf-3df7-44b8-21e6-8b8a1887e64e" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.519 [INFO][4233] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.519 [INFO][4233] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.777 [INFO][4249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.785 [INFO][4249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.785 [INFO][4249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.823 [WARNING][4249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.823 [INFO][4249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.825 [INFO][4249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:56:57.892306 containerd[1620]: 2025-08-13 07:56:57.845 [INFO][4233] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:56:57.915499 systemd[1]: run-netns-cni\x2d32a691cf\x2d3df7\x2d44b8\x2d21e6\x2d8b8a1887e64e.mount: Deactivated successfully. 
Aug 13 07:56:57.928817 containerd[1620]: time="2025-08-13T07:56:57.927976095Z" level=info msg="TearDown network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" successfully" Aug 13 07:56:57.928817 containerd[1620]: time="2025-08-13T07:56:57.928091981Z" level=info msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" returns successfully" Aug 13 07:56:57.933801 containerd[1620]: time="2025-08-13T07:56:57.933306423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wd7mn,Uid:cb2516e5-58ff-4e99-8bda-62cb038aee7c,Namespace:kube-system,Attempt:1,}" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.613 [INFO][4236] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.620 [INFO][4236] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" iface="eth0" netns="/var/run/netns/cni-3e987665-fc7c-53c5-40a0-23eb3b4032de" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.620 [INFO][4236] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" iface="eth0" netns="/var/run/netns/cni-3e987665-fc7c-53c5-40a0-23eb3b4032de" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.625 [INFO][4236] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" iface="eth0" netns="/var/run/netns/cni-3e987665-fc7c-53c5-40a0-23eb3b4032de" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.625 [INFO][4236] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.625 [INFO][4236] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.898 [INFO][4263] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:57.898 [INFO][4263] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:58.100 [INFO][4263] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:58.115 [WARNING][4263] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:58.115 [INFO][4263] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:58.120 [INFO][4263] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:56:58.163022 containerd[1620]: 2025-08-13 07:56:58.134 [INFO][4236] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:56:58.171824 containerd[1620]: time="2025-08-13T07:56:58.166354936Z" level=info msg="TearDown network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" successfully" Aug 13 07:56:58.171824 containerd[1620]: time="2025-08-13T07:56:58.166424640Z" level=info msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" returns successfully" Aug 13 07:56:58.177010 systemd[1]: run-netns-cni\x2d3e987665\x2dfc7c\x2d53c5\x2d40a0\x2d23eb3b4032de.mount: Deactivated successfully. Aug 13 07:56:58.185343 containerd[1620]: time="2025-08-13T07:56:58.184767116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clt64,Uid:981413ed-74fe-461c-914c-e0dc01dda890,Namespace:calico-system,Attempt:1,}" Aug 13 07:56:58.188017 systemd-networkd[1260]: cali66504bb2734: Link UP Aug 13 07:56:58.190463 systemd-networkd[1260]: cali66504bb2734: Gained carrier Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.450 [INFO][4197] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.515 [INFO][4197] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0 whisker-85b7cdd6b9- calico-system 5c034248-a1bc-489a-b627-ac6089ed2313 945 0 2025-08-13 07:56:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:85b7cdd6b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com whisker-85b7cdd6b9-dbvzq eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali66504bb2734 [] [] }} ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.521 [INFO][4197] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.838 [INFO][4252] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" HandleID="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.843 [INFO][4252] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" HandleID="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00041edb0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"whisker-85b7cdd6b9-dbvzq", "timestamp":"2025-08-13 07:56:57.838618974 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.843 [INFO][4252] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.843 [INFO][4252] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.844 [INFO][4252] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.922 [INFO][4252] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:57.964 [INFO][4252] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.002 [INFO][4252] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.018 [INFO][4252] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.033 [INFO][4252] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.033 [INFO][4252] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.045 [INFO][4252] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38 Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.085 [INFO][4252] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.099 [INFO][4252] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.193/26] block=192.168.23.192/26 
handle="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.100 [INFO][4252] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.193/26] handle="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.100 [INFO][4252] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:56:58.266660 containerd[1620]: 2025-08-13 07:56:58.100 [INFO][4252] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.193/26] IPv6=[] ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" HandleID="k8s-pod-network.682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.110 [INFO][4197] cni-plugin/k8s.go 418: Populated endpoint ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0", GenerateName:"whisker-85b7cdd6b9-", Namespace:"calico-system", SelfLink:"", UID:"5c034248-a1bc-489a-b627-ac6089ed2313", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b7cdd6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"whisker-85b7cdd6b9-dbvzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66504bb2734", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.112 [INFO][4197] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.193/32] ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.112 [INFO][4197] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali66504bb2734 ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.137 [INFO][4197] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.138 [INFO][4197] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0", GenerateName:"whisker-85b7cdd6b9-", Namespace:"calico-system", SelfLink:"", UID:"5c034248-a1bc-489a-b627-ac6089ed2313", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"85b7cdd6b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38", Pod:"whisker-85b7cdd6b9-dbvzq", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.23.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali66504bb2734", MAC:"9e:f8:71:76:f5:3f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.270708 containerd[1620]: 2025-08-13 07:56:58.182 [INFO][4197] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38" Namespace="calico-system" Pod="whisker-85b7cdd6b9-dbvzq" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--85b7cdd6b9--dbvzq-eth0" Aug 13 07:56:58.499925 containerd[1620]: time="2025-08-13T07:56:58.498563451Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:58.499925 containerd[1620]: time="2025-08-13T07:56:58.499256143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:58.499925 containerd[1620]: time="2025-08-13T07:56:58.499275731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:58.502339 containerd[1620]: time="2025-08-13T07:56:58.501158483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:58.643346 kernel: bpftool[4392]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Aug 13 07:56:58.656360 systemd-networkd[1260]: calibe300a38d40: Link UP Aug 13 07:56:58.663003 systemd-networkd[1260]: calibe300a38d40: Gained carrier Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.291 [INFO][4273] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.445 [INFO][4273] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0 coredns-7c65d6cfc9- kube-system cb2516e5-58ff-4e99-8bda-62cb038aee7c 950 0 2025-08-13 07:56:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com coredns-7c65d6cfc9-wd7mn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibe300a38d40 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.446 [INFO][4273] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.552 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" HandleID="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.552 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" HandleID="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f850), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-wd7mn", "timestamp":"2025-08-13 07:56:58.551986621 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.552 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.552 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
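The kernel line "bpftool[4392]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set" is a Linux 6.3+ nudge, not an error: the kernel now wants memfd_create() callers to state explicitly whether the memfd may be executable. A hedged Python sketch of the explicit form (the flag values are assumed from <linux/memfd.h>; os.memfd_create itself has been available on Linux since Python 3.8):

    import os

    MFD_CLOEXEC = 0x0001
    MFD_NOEXEC_SEAL = 0x0008   # assumed value from <linux/memfd.h>, Linux 6.3+

    # Passing MFD_NOEXEC_SEAL (or MFD_EXEC) makes the intent explicit and
    # silences the kernel warning seen above.
    fd = os.memfd_create("demo", MFD_CLOEXEC | MFD_NOEXEC_SEAL)
    os.write(fd, b"hello")
    os.close(fd)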
Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.552 [INFO][4347] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.568 [INFO][4347] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.577 [INFO][4347] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.600 [INFO][4347] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.604 [INFO][4347] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.608 [INFO][4347] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.608 [INFO][4347] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.610 [INFO][4347] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3 Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.619 [INFO][4347] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.633 [INFO][4347] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.194/26] block=192.168.23.192/26 handle="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.633 [INFO][4347] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.194/26] handle="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.633 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 07:56:58.714750 containerd[1620]: 2025-08-13 07:56:58.634 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.194/26] IPv6=[] ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" HandleID="k8s-pod-network.47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.646 [INFO][4273] cni-plugin/k8s.go 418: Populated endpoint ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb2516e5-58ff-4e99-8bda-62cb038aee7c", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-wd7mn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe300a38d40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.647 [INFO][4273] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.194/32] ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.647 [INFO][4273] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibe300a38d40 ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.667 [INFO][4273] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.672 [INFO][4273] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb2516e5-58ff-4e99-8bda-62cb038aee7c", ResourceVersion:"950", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3", Pod:"coredns-7c65d6cfc9-wd7mn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe300a38d40", MAC:"2a:35:df:45:17:5b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.719714 containerd[1620]: 2025-08-13 07:56:58.706 [INFO][4273] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3" Namespace="kube-system" Pod="coredns-7c65d6cfc9-wd7mn" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:56:58.768452 containerd[1620]: time="2025-08-13T07:56:58.767999051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-85b7cdd6b9-dbvzq,Uid:5c034248-a1bc-489a-b627-ac6089ed2313,Namespace:calico-system,Attempt:0,} returns sandbox id \"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38\"" Aug 13 07:56:58.785653 containerd[1620]: time="2025-08-13T07:56:58.785472482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Aug 13 07:56:58.792531 systemd-networkd[1260]: cali65fad7f54fc: Link UP Aug 13 07:56:58.795685 systemd-networkd[1260]: cali65fad7f54fc: Gained carrier Aug 13 07:56:58.819851 containerd[1620]: time="2025-08-13T07:56:58.814758876Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:58.819851 containerd[1620]: time="2025-08-13T07:56:58.814870797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:58.819851 containerd[1620]: time="2025-08-13T07:56:58.814889409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:58.819851 containerd[1620]: time="2025-08-13T07:56:58.815034866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.420 [INFO][4292] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.462 [INFO][4292] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0 csi-node-driver- calico-system 981413ed-74fe-461c-914c-e0dc01dda890 951 0 2025-08-13 07:56:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com csi-node-driver-clt64 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali65fad7f54fc [] [] }} ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.462 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.602 [INFO][4352] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" HandleID="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.602 [INFO][4352] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" HandleID="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000386a20), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"csi-node-driver-clt64", "timestamp":"2025-08-13 07:56:58.602430123 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.602 [INFO][4352] ipam/ipam_plugin.go 353: 
About to acquire host-wide IPAM lock. Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.634 [INFO][4352] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.634 [INFO][4352] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.681 [INFO][4352] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.708 [INFO][4352] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.725 [INFO][4352] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.728 [INFO][4352] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.734 [INFO][4352] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.734 [INFO][4352] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.739 [INFO][4352] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6 Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.753 [INFO][4352] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.767 [INFO][4352] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.195/26] block=192.168.23.192/26 handle="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.767 [INFO][4352] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.195/26] handle="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.767 [INFO][4352] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
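All three sandboxes set up in this section draw from the same block, 192.168.23.192/26, because Calico IPAM gives each node an affine block (a /26 of 64 addresses by default for IPv4) and hands out addresses from it under the host-wide lock, hence the sequential 192.168.23.193, .194, and .195. A quick consistency check with Python's ipaddress module, using the values from the log:

    import ipaddress

    block = ipaddress.ip_network("192.168.23.192/26")   # the node's affine block
    print(block.num_addresses)                          # 64 addresses per /26

    claimed = [ipaddress.ip_address(a) for a in
               ("192.168.23.193", "192.168.23.194", "192.168.23.195")]
    assert all(ip in block for ip in claimed)           # all three pod IPs fit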
Aug 13 07:56:58.827544 containerd[1620]: 2025-08-13 07:56:58.768 [INFO][4352] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.195/26] IPv6=[] ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" HandleID="k8s-pod-network.33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.780 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981413ed-74fe-461c-914c-e0dc01dda890", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-clt64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65fad7f54fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.782 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.195/32] ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.782 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali65fad7f54fc ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.797 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.797 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981413ed-74fe-461c-914c-e0dc01dda890", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6", Pod:"csi-node-driver-clt64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65fad7f54fc", MAC:"d2:93:8b:55:42:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:56:58.830898 containerd[1620]: 2025-08-13 07:56:58.822 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6" Namespace="calico-system" Pod="csi-node-driver-clt64" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:56:58.958003 containerd[1620]: time="2025-08-13T07:56:58.956703285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:56:58.958003 containerd[1620]: time="2025-08-13T07:56:58.956806794Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:56:58.958003 containerd[1620]: time="2025-08-13T07:56:58.957706727Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:58.963826 containerd[1620]: time="2025-08-13T07:56:58.959668228Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:56:59.100377 containerd[1620]: time="2025-08-13T07:56:59.100298738Z" level=info msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" Aug 13 07:56:59.104728 containerd[1620]: time="2025-08-13T07:56:59.104270212Z" level=info msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" Aug 13 07:56:59.109805 containerd[1620]: time="2025-08-13T07:56:59.109663785Z" level=info msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" Aug 13 07:56:59.112388 containerd[1620]: time="2025-08-13T07:56:59.112358486Z" level=info msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" Aug 13 07:56:59.157372 containerd[1620]: time="2025-08-13T07:56:59.157323711Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-wd7mn,Uid:cb2516e5-58ff-4e99-8bda-62cb038aee7c,Namespace:kube-system,Attempt:1,} returns sandbox id \"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3\"" Aug 13 07:56:59.173262 containerd[1620]: time="2025-08-13T07:56:59.173065134Z" level=info msg="CreateContainer within sandbox \"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:56:59.318473 containerd[1620]: time="2025-08-13T07:56:59.317047824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-clt64,Uid:981413ed-74fe-461c-914c-e0dc01dda890,Namespace:calico-system,Attempt:1,} returns sandbox id \"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6\"" Aug 13 07:56:59.365641 systemd-networkd[1260]: vxlan.calico: Link UP Aug 13 07:56:59.365655 systemd-networkd[1260]: vxlan.calico: Gained carrier Aug 13 07:56:59.374727 systemd-networkd[1260]: cali66504bb2734: Gained IPv6LL Aug 13 07:56:59.501596 containerd[1620]: time="2025-08-13T07:56:59.491169410Z" level=info msg="CreateContainer within sandbox \"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a75396e48c156c3476fbf6652cf8bf31035f9690c1fe0b2c77dabeb27239d766\"" Aug 13 07:56:59.501596 containerd[1620]: time="2025-08-13T07:56:59.492532928Z" level=info msg="StartContainer for \"a75396e48c156c3476fbf6652cf8bf31035f9690c1fe0b2c77dabeb27239d766\"" Aug 13 07:56:59.497188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604985010.mount: Deactivated successfully. Aug 13 07:56:59.979896 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:56:59.975712 systemd-networkd[1260]: calibe300a38d40: Gained IPv6LL Aug 13 07:56:59.976884 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:56:59.976900 systemd-resolved[1515]: Flushed all caches. 
Aug 13 07:57:00.096974 containerd[1620]: time="2025-08-13T07:57:00.096024299Z" level=info msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" Aug 13 07:57:00.116896 containerd[1620]: time="2025-08-13T07:57:00.116789665Z" level=info msg="StartContainer for \"a75396e48c156c3476fbf6652cf8bf31035f9690c1fe0b2c77dabeb27239d766\" returns successfully" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.721 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.722 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" iface="eth0" netns="/var/run/netns/cni-cace4d81-c362-ed7c-e7c6-9feb3cccc2d8" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.727 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" iface="eth0" netns="/var/run/netns/cni-cace4d81-c362-ed7c-e7c6-9feb3cccc2d8" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.760 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" iface="eth0" netns="/var/run/netns/cni-cace4d81-c362-ed7c-e7c6-9feb3cccc2d8" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.761 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:56:59.761 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.037 [INFO][4623] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.038 [INFO][4623] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.058 [INFO][4623] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.087 [WARNING][4623] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.088 [INFO][4623] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.099 [INFO][4623] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
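With several teardowns and setups interleaved, the easiest way to follow one sandbox through entries like these is to filter on its ContainerID. A small, hypothetical Python filter (the ID is the calico-apiserver sandbox from the surrounding entries; feed the journal text on stdin):

    import re
    import sys

    # Hypothetical helper: keep only the entries mentioning one sandbox ID.
    CID = "4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92"

    for line in sys.stdin:
        if CID not in line:
            continue
        # Extract the Calico source tag when present, e.g. "cni-plugin/k8s.go 640".
        m = re.search(r"\[(?:INFO|WARNING|ERROR)\]\[\d+\] (\S+\.go \d+)", line)
        print(m.group(1) if m else "-", "|", line.strip()[:160])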
Aug 13 07:57:00.143757 containerd[1620]: 2025-08-13 07:57:00.130 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:00.151158 containerd[1620]: time="2025-08-13T07:57:00.149885606Z" level=info msg="TearDown network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" successfully" Aug 13 07:57:00.151158 containerd[1620]: time="2025-08-13T07:57:00.149928702Z" level=info msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" returns successfully" Aug 13 07:57:00.154743 containerd[1620]: time="2025-08-13T07:57:00.154577117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-4hn7r,Uid:3cecc162-5b6e-4863-8dff-bed08c37d53a,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:57:00.156145 systemd[1]: run-netns-cni\x2dcace4d81\x2dc362\x2ded7c\x2de7c6\x2d9feb3cccc2d8.mount: Deactivated successfully. Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.699 [INFO][4541] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.699 [INFO][4541] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" iface="eth0" netns="/var/run/netns/cni-43dadbee-8525-4f00-72e4-0de7d4619d57" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.702 [INFO][4541] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" iface="eth0" netns="/var/run/netns/cni-43dadbee-8525-4f00-72e4-0de7d4619d57" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.706 [INFO][4541] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" iface="eth0" netns="/var/run/netns/cni-43dadbee-8525-4f00-72e4-0de7d4619d57" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.707 [INFO][4541] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:56:59.707 [INFO][4541] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.064 [INFO][4613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.064 [INFO][4613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.120 [INFO][4613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.153 [WARNING][4613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.153 [INFO][4613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.163 [INFO][4613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:00.181159 containerd[1620]: 2025-08-13 07:57:00.170 [INFO][4541] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:00.184882 containerd[1620]: time="2025-08-13T07:57:00.184844877Z" level=info msg="TearDown network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" successfully" Aug 13 07:57:00.185041 containerd[1620]: time="2025-08-13T07:57:00.185002016Z" level=info msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" returns successfully" Aug 13 07:57:00.186067 containerd[1620]: time="2025-08-13T07:57:00.186035568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fc769f-kftlc,Uid:f547ffd7-7d10-4436-85b6-ec353b820f63,Namespace:calico-system,Attempt:1,}" Aug 13 07:57:00.196076 systemd[1]: run-netns-cni\x2d43dadbee\x2d8525\x2d4f00\x2d72e4\x2d0de7d4619d57.mount: Deactivated successfully. Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.794 [INFO][4559] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.798 [INFO][4559] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" iface="eth0" netns="/var/run/netns/cni-152ddc47-3e9a-dbea-9e70-fef2405ca063" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.802 [INFO][4559] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" iface="eth0" netns="/var/run/netns/cni-152ddc47-3e9a-dbea-9e70-fef2405ca063" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.803 [INFO][4559] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" iface="eth0" netns="/var/run/netns/cni-152ddc47-3e9a-dbea-9e70-fef2405ca063" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.803 [INFO][4559] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:56:59.803 [INFO][4559] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.094 [INFO][4630] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.095 [INFO][4630] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.163 [INFO][4630] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.222 [WARNING][4630] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.222 [INFO][4630] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.227 [INFO][4630] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:00.253590 containerd[1620]: 2025-08-13 07:57:00.244 [INFO][4559] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:00.256166 containerd[1620]: time="2025-08-13T07:57:00.255828749Z" level=info msg="TearDown network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" successfully" Aug 13 07:57:00.256678 containerd[1620]: time="2025-08-13T07:57:00.256574692Z" level=info msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" returns successfully" Aug 13 07:57:00.261049 containerd[1620]: time="2025-08-13T07:57:00.260973095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-ncws4,Uid:b6d807f6-bf8b-4276-bd29-e9b753213504,Namespace:calico-apiserver,Attempt:1,}" Aug 13 07:57:00.267737 systemd[1]: run-netns-cni\x2d152ddc47\x2d3e9a\x2ddbea\x2d9e70\x2dfef2405ca063.mount: Deactivated successfully. Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.756 [INFO][4529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.756 [INFO][4529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" iface="eth0" netns="/var/run/netns/cni-6e49d29e-49ed-82a8-eadc-40dbae0a8cab" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.760 [INFO][4529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" iface="eth0" netns="/var/run/netns/cni-6e49d29e-49ed-82a8-eadc-40dbae0a8cab" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.782 [INFO][4529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" iface="eth0" netns="/var/run/netns/cni-6e49d29e-49ed-82a8-eadc-40dbae0a8cab" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.782 [INFO][4529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:56:59.782 [INFO][4529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.120 [INFO][4625] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.130 [INFO][4625] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.228 [INFO][4625] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.274 [WARNING][4625] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.274 [INFO][4625] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.280 [INFO][4625] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:00.303327 containerd[1620]: 2025-08-13 07:57:00.296 [INFO][4529] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:00.305386 containerd[1620]: time="2025-08-13T07:57:00.303495670Z" level=info msg="TearDown network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" successfully" Aug 13 07:57:00.305386 containerd[1620]: time="2025-08-13T07:57:00.303527707Z" level=info msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" returns successfully" Aug 13 07:57:00.305913 containerd[1620]: time="2025-08-13T07:57:00.305565847Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-t8sp4,Uid:11f4b399-8c5b-42d8-8ee1-f13c6bb84b22,Namespace:calico-system,Attempt:1,}" Aug 13 07:57:00.399149 systemd-networkd[1260]: cali65fad7f54fc: Gained IPv6LL Aug 13 07:57:00.701873 kubelet[2873]: I0813 07:57:00.700217 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-wd7mn" podStartSLOduration=48.70015178 podStartE2EDuration="48.70015178s" podCreationTimestamp="2025-08-13 07:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:57:00.694763272 +0000 UTC m=+53.925435662" watchObservedRunningTime="2025-08-13 07:57:00.70015178 +0000 UTC m=+53.930824164" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.365 [INFO][4705] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.368 [INFO][4705] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" iface="eth0" netns="/var/run/netns/cni-12e43e80-e76f-2578-e738-ef47f5232669" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.368 [INFO][4705] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" iface="eth0" netns="/var/run/netns/cni-12e43e80-e76f-2578-e738-ef47f5232669" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.375 [INFO][4705] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" iface="eth0" netns="/var/run/netns/cni-12e43e80-e76f-2578-e738-ef47f5232669" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.375 [INFO][4705] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.375 [INFO][4705] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.785 [INFO][4751] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.794 [INFO][4751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.797 [INFO][4751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.840 [WARNING][4751] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.840 [INFO][4751] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.851 [INFO][4751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:00.881702 containerd[1620]: 2025-08-13 07:57:00.862 [INFO][4705] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:00.887705 containerd[1620]: time="2025-08-13T07:57:00.887077999Z" level=info msg="TearDown network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" successfully" Aug 13 07:57:00.887705 containerd[1620]: time="2025-08-13T07:57:00.887666049Z" level=info msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" returns successfully" Aug 13 07:57:00.899717 containerd[1620]: time="2025-08-13T07:57:00.899602481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twz7x,Uid:f05ad85b-d4f3-4c2d-b462-454f0dd5790f,Namespace:kube-system,Attempt:1,}" Aug 13 07:57:00.930493 systemd[1]: run-netns-cni\x2d6e49d29e\x2d49ed\x2d82a8\x2deadc\x2d40dbae0a8cab.mount: Deactivated successfully. Aug 13 07:57:00.930815 systemd[1]: run-netns-cni\x2d12e43e80\x2de76f\x2d2578\x2de738\x2def47f5232669.mount: Deactivated successfully. 
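Each "run-netns-cni-*.mount: Deactivated successfully" line corresponds to a CNI network-namespace bind-mount under /var/run/netns being unmounted and its mountpoint removed; systemd merely notices the mount unit disappearing. A sketch of that cleanup using a lazy unmount (the path in main is hypothetical, following the cni-<uuid> pattern above):

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/sys/unix"
)

// cleanupNetns lazily detaches a named netns bind-mount and removes the
// mountpoint file; EINVAL (not mounted) is tolerated so cleanup stays
// idempotent.
func cleanupNetns(path string) error {
	if err := unix.Unmount(path, unix.MNT_DETACH); err != nil && err != unix.EINVAL {
		return fmt.Errorf("unmount %s: %w", path, err)
	}
	return os.Remove(path)
}

func main() {
	if err := cleanupNetns("/var/run/netns/cni-00000000-dead-beef-0000-000000000000"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```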
Aug 13 07:57:01.010411 systemd-networkd[1260]: calicf5b0b2e370: Link UP Aug 13 07:57:01.014657 systemd-networkd[1260]: calicf5b0b2e370: Gained carrier Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.573 [INFO][4715] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0 calico-apiserver-75f8484686- calico-apiserver 3cecc162-5b6e-4863-8dff-bed08c37d53a 975 0 2025-08-13 07:56:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f8484686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com calico-apiserver-75f8484686-4hn7r eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calicf5b0b2e370 [] [] }} ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.578 [INFO][4715] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.868 [INFO][4776] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" HandleID="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.868 [INFO][4776] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" HandleID="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a8120), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-er0cq.gb1.brightbox.com", "pod":"calico-apiserver-75f8484686-4hn7r", "timestamp":"2025-08-13 07:57:00.868482579 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.868 [INFO][4776] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.868 [INFO][4776] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
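The "Link UP" / "Gained carrier" pairs that systemd-networkd logs for each caliXXXX interface are rtnetlink link-state transitions, observable by any process on the host. A sketch of watching the same transitions from Go, assuming the third-party github.com/vishvananda/netlink package (systemd-networkd itself speaks rtnetlink directly):

```go
package main

import (
	"fmt"
	"strings"

	"github.com/vishvananda/netlink"
)

func main() {
	updates := make(chan netlink.LinkUpdate)
	done := make(chan struct{})
	defer close(done)
	if err := netlink.LinkSubscribe(updates, done); err != nil {
		panic(err)
	}
	for u := range updates {
		attrs := u.Link.Attrs()
		// Calico host-side veths carry the "cali" prefix seen in the log.
		if strings.HasPrefix(attrs.Name, "cali") {
			fmt.Printf("%s: oper state %s\n", attrs.Name, attrs.OperState)
		}
	}
}
```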
Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.868 [INFO][4776] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.887 [INFO][4776] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.899 [INFO][4776] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.931 [INFO][4776] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.941 [INFO][4776] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.958 [INFO][4776] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.958 [INFO][4776] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.964 [INFO][4776] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.973 [INFO][4776] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4776] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.196/26] block=192.168.23.192/26 handle="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4776] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.196/26] handle="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4776] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
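The ADD-side IPAM walk above runs: acquire the host-wide lock, look up this host's block affinities, try the affine block 192.168.23.192/26, load it, claim the next free ordinal, then persist the block ("Writing block in order to claim IPs"). A toy model of just the claim step; the real ipam.go keeps blocks in the datastore and relies on compare-and-swap writes rather than an in-memory map:

```go
package main

import (
	"fmt"
	"net"
)

// ipamBlock is a toy allocation block: a /26 plus a record of which
// ordinals (offsets from the block base) are already taken.
type ipamBlock struct {
	cidr      *net.IPNet
	allocated map[int]bool
}

// assignNext mirrors "Attempting to assign 1 addresses from block":
// scan the ordinals and claim the first free one.
func (b *ipamBlock) assignNext() (net.IP, error) {
	base := b.cidr.IP.To4()
	ones, bits := b.cidr.Mask.Size()
	for ord := 0; ord < 1<<(bits-ones); ord++ { // 64 ordinals in a /26
		if b.allocated[ord] {
			continue
		}
		b.allocated[ord] = true
		ip := make(net.IP, len(base))
		copy(ip, base)
		ip[3] += byte(ord) // safe: a /26 base octet plus 63 cannot overflow
		return ip, nil
	}
	return nil, fmt.Errorf("block %s is full", b.cidr)
}

func main() {
	_, cidr, _ := net.ParseCIDR("192.168.23.192/26")
	// Ordinals 0-3 (.192-.195) already taken by earlier endpoints.
	b := &ipamBlock{cidr: cidr, allocated: map[int]bool{0: true, 1: true, 2: true, 3: true}}
	ip, _ := b.assignNext()
	fmt.Println(ip) // 192.168.23.196, the address handed to calico-apiserver-75f8484686-4hn7r
}
```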
Aug 13 07:57:01.066190 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4776] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.196/26] IPv6=[] ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" HandleID="k8s-pod-network.55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:00.998 [INFO][4715] cni-plugin/k8s.go 418: Populated endpoint ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cecc162-5b6e-4863-8dff-bed08c37d53a", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-75f8484686-4hn7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf5b0b2e370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:00.998 [INFO][4715] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.196/32] ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:00.998 [INFO][4715] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicf5b0b2e370 ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:01.016 [INFO][4715] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:01.019 
[INFO][4715] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cecc162-5b6e-4863-8dff-bed08c37d53a", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a", Pod:"calico-apiserver-75f8484686-4hn7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf5b0b2e370", MAC:"da:19:a5:78:0d:85", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.068456 containerd[1620]: 2025-08-13 07:57:01.058 [INFO][4715] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-4hn7r" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:01.215049 systemd-networkd[1260]: cali857ee685224: Link UP Aug 13 07:57:01.219189 systemd-networkd[1260]: cali857ee685224: Gained carrier Aug 13 07:57:01.231323 systemd-networkd[1260]: vxlan.calico: Gained IPv6LL Aug 13 07:57:01.248000 containerd[1620]: time="2025-08-13T07:57:01.245628649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:57:01.248000 containerd[1620]: time="2025-08-13T07:57:01.245712824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:57:01.248000 containerd[1620]: time="2025-08-13T07:57:01.245731637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.248931 containerd[1620]: time="2025-08-13T07:57:01.248664271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.587 [INFO][4729] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0 calico-apiserver-75f8484686- calico-apiserver b6d807f6-bf8b-4276-bd29-e9b753213504 977 0 2025-08-13 07:56:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:75f8484686 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com calico-apiserver-75f8484686-ncws4 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali857ee685224 [] [] }} ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.587 [INFO][4729] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.895 [INFO][4774] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" HandleID="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.896 [INFO][4774] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" HandleID="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00033cc30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-er0cq.gb1.brightbox.com", "pod":"calico-apiserver-75f8484686-ncws4", "timestamp":"2025-08-13 07:57:00.895049361 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.896 [INFO][4774] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4774] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:00.988 [INFO][4774] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.009 [INFO][4774] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.023 [INFO][4774] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.030 [INFO][4774] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.053 [INFO][4774] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.063 [INFO][4774] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.063 [INFO][4774] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.067 [INFO][4774] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.079 [INFO][4774] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.099 [INFO][4774] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.197/26] block=192.168.23.192/26 handle="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.099 [INFO][4774] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.197/26] handle="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.100 [INFO][4774] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
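Every IPAM transaction in this log is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock", which serializes concurrent CNI invocations on the node; the growing gaps between "About to acquire" and "Acquired" as four ADDs pile up are that serialization at work. The lock's implementation isn't visible in the log; a flock-based sketch of the same pattern, with a hypothetical lock path:

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// Hypothetical path; where the real plugin keeps its host-wide lock is an
// implementation detail not shown in the log.
const lockPath = "/var/run/calico/ipam.lock"

// withHostWideLock runs fn while holding an exclusive advisory file lock,
// blocking until every other holder on the host releases it.
func withHostWideLock(fn func() error) error {
	f, err := os.OpenFile(lockPath, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	err := withHostWideLock(func() error {
		fmt.Println("assign or release addresses here")
		return nil
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```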
Aug 13 07:57:01.285429 containerd[1620]: 2025-08-13 07:57:01.100 [INFO][4774] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.197/26] IPv6=[] ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" HandleID="k8s-pod-network.0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.135 [INFO][4729] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d807f6-bf8b-4276-bd29-e9b753213504", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-75f8484686-ncws4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857ee685224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.135 [INFO][4729] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.197/32] ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.135 [INFO][4729] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali857ee685224 ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.229 [INFO][4729] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.233 
[INFO][4729] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d807f6-bf8b-4276-bd29-e9b753213504", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd", Pod:"calico-apiserver-75f8484686-ncws4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857ee685224", MAC:"da:d4:79:39:16:7b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.286721 containerd[1620]: 2025-08-13 07:57:01.257 [INFO][4729] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd" Namespace="calico-apiserver" Pod="calico-apiserver-75f8484686-ncws4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:01.325856 systemd-networkd[1260]: calic7643c35c22: Link UP Aug 13 07:57:01.336138 systemd-networkd[1260]: calic7643c35c22: Gained carrier Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:00.577 [INFO][4740] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0 calico-kube-controllers-85fc769f- calico-system f547ffd7-7d10-4436-85b6-ec353b820f63 974 0 2025-08-13 07:56:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85fc769f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com calico-kube-controllers-85fc769f-kftlc eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic7643c35c22 [] [] }} ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" 
WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:00.581 [INFO][4740] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:00.921 [INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" HandleID="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:00.921 [INFO][4778] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" HandleID="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000602440), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"calico-kube-controllers-85fc769f-kftlc", "timestamp":"2025-08-13 07:57:00.921388582 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:00.921 [INFO][4778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.104 [INFO][4778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.105 [INFO][4778] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.120 [INFO][4778] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.145 [INFO][4778] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.214 [INFO][4778] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.221 [INFO][4778] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.230 [INFO][4778] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.230 [INFO][4778] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.237 [INFO][4778] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.249 [INFO][4778] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.283 [INFO][4778] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.198/26] block=192.168.23.192/26 handle="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.284 [INFO][4778] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.198/26] handle="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.284 [INFO][4778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
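Calico appends structured key=value fields (ContainerID, HandleID, Workload) to its log lines, so the four interleaved transactions above can be demultiplexed mechanically. A small extraction sketch over the field names visible in this log:

```go
package main

import (
	"fmt"
	"regexp"
)

// fieldRe matches the quoted key=value fields Calico appends to each line.
var fieldRe = regexp.MustCompile(`(ContainerID|HandleID|Workload)="([^"]+)"`)

func main() {
	line := `[INFO][4778] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ` +
		`ContainerID="3f11aa13cf6584d2" HandleID="k8s-pod-network.3f11aa13cf6584d2"`
	for _, m := range fieldRe.FindAllStringSubmatch(line, -1) {
		fmt.Printf("%s = %s\n", m[1], m[2])
	}
}
```

Grouping on ContainerID is enough to reassemble each teardown or setup into a single ordered trace.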
Aug 13 07:57:01.439417 containerd[1620]: 2025-08-13 07:57:01.284 [INFO][4778] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.198/26] IPv6=[] ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" HandleID="k8s-pod-network.3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.301 [INFO][4740] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0", GenerateName:"calico-kube-controllers-85fc769f-", Namespace:"calico-system", SelfLink:"", UID:"f547ffd7-7d10-4436-85b6-ec353b820f63", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fc769f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-85fc769f-kftlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7643c35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.305 [INFO][4740] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.198/32] ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.305 [INFO][4740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic7643c35c22 ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.347 [INFO][4740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 
07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.365 [INFO][4740] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0", GenerateName:"calico-kube-controllers-85fc769f-", Namespace:"calico-system", SelfLink:"", UID:"f547ffd7-7d10-4436-85b6-ec353b820f63", ResourceVersion:"974", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fc769f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef", Pod:"calico-kube-controllers-85fc769f-kftlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7643c35c22", MAC:"d2:8f:29:2c:56:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.441933 containerd[1620]: 2025-08-13 07:57:01.407 [INFO][4740] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef" Namespace="calico-system" Pod="calico-kube-controllers-85fc769f-kftlc" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:01.450533 systemd-networkd[1260]: cali811ab0c35bd: Link UP Aug 13 07:57:01.463059 systemd-networkd[1260]: cali811ab0c35bd: Gained carrier Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:00.780 [INFO][4752] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0 goldmane-58fd7646b9- calico-system 11f4b399-8c5b-42d8-8ee1-f13c6bb84b22 976 0 2025-08-13 07:56:28 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com goldmane-58fd7646b9-t8sp4 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali811ab0c35bd [] [] }} ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-" Aug 13 07:57:01.530576 
containerd[1620]: 2025-08-13 07:57:00.783 [INFO][4752] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.061 [INFO][4795] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" HandleID="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.061 [INFO][4795] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" HandleID="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00040d520), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"goldmane-58fd7646b9-t8sp4", "timestamp":"2025-08-13 07:57:01.06144944 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.061 [INFO][4795] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.287 [INFO][4795] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.287 [INFO][4795] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.301 [INFO][4795] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.308 [INFO][4795] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.319 [INFO][4795] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.330 [INFO][4795] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.343 [INFO][4795] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.344 [INFO][4795] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.361 [INFO][4795] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8 Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.376 [INFO][4795] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.406 [INFO][4795] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.199/26] block=192.168.23.192/26 handle="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.406 [INFO][4795] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.23.199/26] handle="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.406 [INFO][4795] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
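The interface names systemd-networkd keeps reporting (calicf5b0b2e370, cali857ee685224, calic7643c35c22, cali811ab0c35bd) are derived, not random: Calico builds the host-side veth name from a fixed prefix plus the leading hex of a hash over the workload's identity, which keeps names stable across retries. A sketch of that scheme; the exact hash input varies between versions, so the "<namespace>.<pod>" form here is an assumption:

```go
package main

import (
	"crypto/sha1"
	"fmt"
)

// vethName derives a stable "cali" + 11-hex-char interface name from the
// workload identity, in the style of the names seen in this log.
func vethName(namespace, pod string) string {
	sum := sha1.Sum([]byte(namespace + "." + pod))
	return fmt.Sprintf("cali%x", sum[:])[:4+11]
}

func main() {
	fmt.Println(vethName("calico-system", "goldmane-58fd7646b9-t8sp4"))
}
```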
Aug 13 07:57:01.530576 containerd[1620]: 2025-08-13 07:57:01.406 [INFO][4795] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.199/26] IPv6=[] ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" HandleID="k8s-pod-network.828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.424 [INFO][4752] cni-plugin/k8s.go 418: Populated endpoint ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-58fd7646b9-t8sp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali811ab0c35bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.424 [INFO][4752] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.199/32] ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.425 [INFO][4752] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali811ab0c35bd ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.477 [INFO][4752] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.482 [INFO][4752] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" 
Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8", Pod:"goldmane-58fd7646b9-t8sp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali811ab0c35bd", MAC:"32:1d:ec:8e:24:47", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:01.531752 containerd[1620]: 2025-08-13 07:57:01.507 [INFO][4752] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8" Namespace="calico-system" Pod="goldmane-58fd7646b9-t8sp4" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:01.555590 containerd[1620]: time="2025-08-13T07:57:01.553471780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:57:01.561958 containerd[1620]: time="2025-08-13T07:57:01.555509769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:57:01.561958 containerd[1620]: time="2025-08-13T07:57:01.560464297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.571128 containerd[1620]: time="2025-08-13T07:57:01.566385098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.618518 containerd[1620]: time="2025-08-13T07:57:01.614426213Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:57:01.618518 containerd[1620]: time="2025-08-13T07:57:01.614514956Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:57:01.618518 containerd[1620]: time="2025-08-13T07:57:01.614531964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.618518 containerd[1620]: time="2025-08-13T07:57:01.614682559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.723736 containerd[1620]: time="2025-08-13T07:57:01.723643648Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-4hn7r,Uid:3cecc162-5b6e-4863-8dff-bed08c37d53a,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a\"" Aug 13 07:57:01.781503 containerd[1620]: time="2025-08-13T07:57:01.781256097Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:57:01.781503 containerd[1620]: time="2025-08-13T07:57:01.781449617Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:57:01.784411 containerd[1620]: time="2025-08-13T07:57:01.781479212Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.786749 containerd[1620]: time="2025-08-13T07:57:01.786023410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:01.864853 systemd-networkd[1260]: cali69932d82b8b: Link UP Aug 13 07:57:01.892417 systemd-networkd[1260]: cali69932d82b8b: Gained carrier Aug 13 07:57:01.955572 containerd[1620]: time="2025-08-13T07:57:01.955472535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-75f8484686-ncws4,Uid:b6d807f6-bf8b-4276-bd29-e9b753213504,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd\"" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.226 [INFO][4808] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0 coredns-7c65d6cfc9- kube-system f05ad85b-d4f3-4c2d-b462-454f0dd5790f 981 0 2025-08-13 07:56:12 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-er0cq.gb1.brightbox.com coredns-7c65d6cfc9-twz7x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali69932d82b8b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.227 [INFO][4808] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.691 [INFO][4857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" HandleID="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" 
Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.691 [INFO][4857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" HandleID="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fd70), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-er0cq.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-twz7x", "timestamp":"2025-08-13 07:57:01.691078128 +0000 UTC"}, Hostname:"srv-er0cq.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.692 [INFO][4857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.692 [INFO][4857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.692 [INFO][4857] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-er0cq.gb1.brightbox.com' Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.715 [INFO][4857] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.732 [INFO][4857] ipam/ipam.go 394: Looking up existing affinities for host host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.742 [INFO][4857] ipam/ipam.go 511: Trying affinity for 192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.748 [INFO][4857] ipam/ipam.go 158: Attempting to load block cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.756 [INFO][4857] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.23.192/26 host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.756 [INFO][4857] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.23.192/26 handle="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.759 [INFO][4857] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.788 [INFO][4857] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.23.192/26 handle="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.812 [INFO][4857] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.23.200/26] block=192.168.23.192/26 handle="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.812 [INFO][4857] ipam/ipam.go 878: Auto-assigned 1 out of 1 
IPv4s: [192.168.23.200/26] handle="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" host="srv-er0cq.gb1.brightbox.com" Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.812 [INFO][4857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:01.998852 containerd[1620]: 2025-08-13 07:57:01.812 [INFO][4857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.23.200/26] IPv6=[] ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" HandleID="k8s-pod-network.fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.840 [INFO][4808] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f05ad85b-d4f3-4c2d-b462-454f0dd5790f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-twz7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69932d82b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.841 [INFO][4808] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.23.200/32] ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.841 [INFO][4808] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali69932d82b8b ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" 
WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.895 [INFO][4808] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.904 [INFO][4808] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f05ad85b-d4f3-4c2d-b462-454f0dd5790f", ResourceVersion:"981", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e", Pod:"coredns-7c65d6cfc9-twz7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69932d82b8b", MAC:"0a:10:77:f3:12:55", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:02.001670 containerd[1620]: 2025-08-13 07:57:01.972 [INFO][4808] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e" Namespace="kube-system" Pod="coredns-7c65d6cfc9-twz7x" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:02.071602 containerd[1620]: time="2025-08-13T07:57:02.071481810Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 13 07:57:02.071967 containerd[1620]: time="2025-08-13T07:57:02.071550631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 13 07:57:02.071967 containerd[1620]: time="2025-08-13T07:57:02.071579533Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:02.071967 containerd[1620]: time="2025-08-13T07:57:02.071716710Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 13 07:57:02.080477 containerd[1620]: time="2025-08-13T07:57:02.080331831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fc769f-kftlc,Uid:f547ffd7-7d10-4436-85b6-ec353b820f63,Namespace:calico-system,Attempt:1,} returns sandbox id \"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef\"" Aug 13 07:57:02.111096 containerd[1620]: time="2025-08-13T07:57:02.111051653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-t8sp4,Uid:11f4b399-8c5b-42d8-8ee1-f13c6bb84b22,Namespace:calico-system,Attempt:1,} returns sandbox id \"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8\"" Aug 13 07:57:02.209505 containerd[1620]: time="2025-08-13T07:57:02.208854493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-twz7x,Uid:f05ad85b-d4f3-4c2d-b462-454f0dd5790f,Namespace:kube-system,Attempt:1,} returns sandbox id \"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e\"" Aug 13 07:57:02.215491 containerd[1620]: time="2025-08-13T07:57:02.215412126Z" level=info msg="CreateContainer within sandbox \"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 07:57:02.220328 containerd[1620]: time="2025-08-13T07:57:02.220147266Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:02.235481 containerd[1620]: time="2025-08-13T07:57:02.235364593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Aug 13 07:57:02.237723 containerd[1620]: time="2025-08-13T07:57:02.237616763Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:02.240964 containerd[1620]: time="2025-08-13T07:57:02.239904880Z" level=info msg="CreateContainer within sandbox \"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b56e0c5bd20cc81aa5976171edc86d1da22d55c4fb85835e71a2ab4538c3799a\"" Aug 13 07:57:02.241829 containerd[1620]: time="2025-08-13T07:57:02.241797332Z" level=info msg="StartContainer for \"b56e0c5bd20cc81aa5976171edc86d1da22d55c4fb85835e71a2ab4538c3799a\"" Aug 13 07:57:02.246692 containerd[1620]: time="2025-08-13T07:57:02.246608588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:02.251196 containerd[1620]: time="2025-08-13T07:57:02.251068259Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 3.465476367s" Aug 13 07:57:02.251196 containerd[1620]: time="2025-08-13T07:57:02.251117888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Aug 13 07:57:02.253904 containerd[1620]: time="2025-08-13T07:57:02.253606241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 07:57:02.256438 containerd[1620]: time="2025-08-13T07:57:02.256385516Z" level=info msg="CreateContainer within sandbox \"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Aug 13 07:57:02.293044 containerd[1620]: time="2025-08-13T07:57:02.292916067Z" level=info msg="CreateContainer within sandbox \"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"70dafd49f6cb8039690ee6d039695f1e2c2aaa299a3ace4e7193b1966ddfdf12\"" Aug 13 07:57:02.298946 containerd[1620]: time="2025-08-13T07:57:02.298105881Z" level=info msg="StartContainer for \"70dafd49f6cb8039690ee6d039695f1e2c2aaa299a3ace4e7193b1966ddfdf12\"" Aug 13 07:57:02.341245 containerd[1620]: time="2025-08-13T07:57:02.341186570Z" level=info msg="StartContainer for \"b56e0c5bd20cc81aa5976171edc86d1da22d55c4fb85835e71a2ab4538c3799a\" returns successfully" Aug 13 07:57:02.446413 systemd-networkd[1260]: cali857ee685224: Gained IPv6LL Aug 13 07:57:02.448920 containerd[1620]: time="2025-08-13T07:57:02.448879248Z" level=info msg="StartContainer for \"70dafd49f6cb8039690ee6d039695f1e2c2aaa299a3ace4e7193b1966ddfdf12\" returns successfully" Aug 13 07:57:02.574734 systemd-networkd[1260]: calicf5b0b2e370: Gained IPv6LL Aug 13 07:57:02.699607 kubelet[2873]: I0813 07:57:02.698910 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-twz7x" podStartSLOduration=50.698888811 podStartE2EDuration="50.698888811s" podCreationTimestamp="2025-08-13 07:56:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 07:57:02.696418778 +0000 UTC m=+55.927091170" watchObservedRunningTime="2025-08-13 07:57:02.698888811 +0000 UTC m=+55.929561192" Aug 13 07:57:03.022852 systemd-networkd[1260]: cali811ab0c35bd: Gained IPv6LL Aug 13 07:57:03.150758 systemd-networkd[1260]: calic7643c35c22: Gained IPv6LL Aug 13 07:57:03.662507 systemd-networkd[1260]: cali69932d82b8b: Gained IPv6LL Aug 13 07:57:04.135755 containerd[1620]: time="2025-08-13T07:57:04.135654443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:04.137678 containerd[1620]: time="2025-08-13T07:57:04.137545365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 07:57:04.139166 containerd[1620]: time="2025-08-13T07:57:04.138726836Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:04.142658 containerd[1620]: time="2025-08-13T07:57:04.142624291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:04.144613 containerd[1620]: time="2025-08-13T07:57:04.144505597Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.890853907s" Aug 13 07:57:04.144692 containerd[1620]: time="2025-08-13T07:57:04.144656128Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 07:57:04.147059 containerd[1620]: time="2025-08-13T07:57:04.147028924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:57:04.152771 containerd[1620]: time="2025-08-13T07:57:04.152714700Z" level=info msg="CreateContainer within sandbox \"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 07:57:04.192798 containerd[1620]: time="2025-08-13T07:57:04.192607778Z" level=info msg="CreateContainer within sandbox \"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"20c1e762a310f18320f7f8283936d0b63b12be95313035b58a80f311fbe56729\"" Aug 13 07:57:04.194779 containerd[1620]: time="2025-08-13T07:57:04.193943942Z" level=info msg="StartContainer for \"20c1e762a310f18320f7f8283936d0b63b12be95313035b58a80f311fbe56729\"" Aug 13 07:57:04.262262 systemd[1]: run-containerd-runc-k8s.io-20c1e762a310f18320f7f8283936d0b63b12be95313035b58a80f311fbe56729-runc.fbjVYd.mount: Deactivated successfully. Aug 13 07:57:04.305938 containerd[1620]: time="2025-08-13T07:57:04.305820077Z" level=info msg="StartContainer for \"20c1e762a310f18320f7f8283936d0b63b12be95313035b58a80f311fbe56729\" returns successfully" Aug 13 07:57:07.278161 containerd[1620]: time="2025-08-13T07:57:07.278014325Z" level=info msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.430 [WARNING][5229] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb2516e5-58ff-4e99-8bda-62cb038aee7c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3", Pod:"coredns-7c65d6cfc9-wd7mn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe300a38d40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.431 [INFO][5229] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.431 [INFO][5229] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" iface="eth0" netns="" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.431 [INFO][5229] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.431 [INFO][5229] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.529 [INFO][5237] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.529 [INFO][5237] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.531 [INFO][5237] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.545 [WARNING][5237] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.545 [INFO][5237] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.548 [INFO][5237] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:07.556410 containerd[1620]: 2025-08-13 07:57:07.552 [INFO][5229] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.559927 containerd[1620]: time="2025-08-13T07:57:07.556462649Z" level=info msg="TearDown network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" successfully" Aug 13 07:57:07.559927 containerd[1620]: time="2025-08-13T07:57:07.557648486Z" level=info msg="StopPodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" returns successfully" Aug 13 07:57:07.563865 containerd[1620]: time="2025-08-13T07:57:07.563818385Z" level=info msg="RemovePodSandbox for \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" Aug 13 07:57:07.563960 containerd[1620]: time="2025-08-13T07:57:07.563879925Z" level=info msg="Forcibly stopping sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\"" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.648 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cb2516e5-58ff-4e99-8bda-62cb038aee7c", ResourceVersion:"986", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"47d7836a5120f6ed5ec0b57fc68378013f6b3a2ebf199cc6e672d40de67714e3", Pod:"coredns-7c65d6cfc9-wd7mn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibe300a38d40", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.651 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.651 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" iface="eth0" netns="" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.651 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.651 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.724 [INFO][5259] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.725 [INFO][5259] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.725 [INFO][5259] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.735 [WARNING][5259] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.736 [INFO][5259] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" HandleID="k8s-pod-network.998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--wd7mn-eth0" Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.738 [INFO][5259] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:07.744942 containerd[1620]: 2025-08-13 07:57:07.741 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65" Aug 13 07:57:07.746256 containerd[1620]: time="2025-08-13T07:57:07.745362629Z" level=info msg="TearDown network for sandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" successfully" Aug 13 07:57:07.757129 containerd[1620]: time="2025-08-13T07:57:07.757056202Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:07.757558 containerd[1620]: time="2025-08-13T07:57:07.757389904Z" level=info msg="RemovePodSandbox \"998c7791803250f2b1ddcd3c215f959b9cbbcfef5c4da960e9caa29c6dd80b65\" returns successfully" Aug 13 07:57:07.758797 containerd[1620]: time="2025-08-13T07:57:07.758768222Z" level=info msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.841 [WARNING][5274] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cecc162-5b6e-4863-8dff-bed08c37d53a", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a", Pod:"calico-apiserver-75f8484686-4hn7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf5b0b2e370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.841 [INFO][5274] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.841 [INFO][5274] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" iface="eth0" netns="" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.841 [INFO][5274] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.841 [INFO][5274] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.915 [INFO][5281] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.916 [INFO][5281] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.916 [INFO][5281] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.927 [WARNING][5281] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.927 [INFO][5281] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.929 [INFO][5281] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:07.936385 containerd[1620]: 2025-08-13 07:57:07.933 [INFO][5274] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:07.936385 containerd[1620]: time="2025-08-13T07:57:07.935972463Z" level=info msg="TearDown network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" successfully" Aug 13 07:57:07.936385 containerd[1620]: time="2025-08-13T07:57:07.936015377Z" level=info msg="StopPodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" returns successfully" Aug 13 07:57:07.938079 containerd[1620]: time="2025-08-13T07:57:07.938036498Z" level=info msg="RemovePodSandbox for \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" Aug 13 07:57:07.938154 containerd[1620]: time="2025-08-13T07:57:07.938084744Z" level=info msg="Forcibly stopping sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\"" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.055 [WARNING][5295] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"3cecc162-5b6e-4863-8dff-bed08c37d53a", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a", Pod:"calico-apiserver-75f8484686-4hn7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calicf5b0b2e370", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.055 [INFO][5295] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.055 [INFO][5295] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" iface="eth0" netns="" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.055 [INFO][5295] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.055 [INFO][5295] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.149 [INFO][5302] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.150 [INFO][5302] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.150 [INFO][5302] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.163 [WARNING][5302] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.165 [INFO][5302] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" HandleID="k8s-pod-network.4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--4hn7r-eth0" Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.168 [INFO][5302] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:08.176919 containerd[1620]: 2025-08-13 07:57:08.174 [INFO][5295] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92" Aug 13 07:57:08.178929 containerd[1620]: time="2025-08-13T07:57:08.176978903Z" level=info msg="TearDown network for sandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" successfully" Aug 13 07:57:08.198226 containerd[1620]: time="2025-08-13T07:57:08.197140045Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:08.198226 containerd[1620]: time="2025-08-13T07:57:08.198341128Z" level=info msg="RemovePodSandbox \"4872a12c651bf61d817f5d91685ae88e4f03801f5abdf76eb8355de88bd53d92\" returns successfully" Aug 13 07:57:08.199684 containerd[1620]: time="2025-08-13T07:57:08.199647609Z" level=info msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.320 [WARNING][5316] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8", Pod:"goldmane-58fd7646b9-t8sp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali811ab0c35bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.320 [INFO][5316] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.320 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" iface="eth0" netns="" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.320 [INFO][5316] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.320 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.413 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.414 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.414 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.441 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.441 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.444 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:08.464512 containerd[1620]: 2025-08-13 07:57:08.450 [INFO][5316] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.470706 containerd[1620]: time="2025-08-13T07:57:08.467955427Z" level=info msg="TearDown network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" successfully" Aug 13 07:57:08.470706 containerd[1620]: time="2025-08-13T07:57:08.468531077Z" level=info msg="StopPodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" returns successfully" Aug 13 07:57:08.473662 containerd[1620]: time="2025-08-13T07:57:08.472731006Z" level=info msg="RemovePodSandbox for \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" Aug 13 07:57:08.473662 containerd[1620]: time="2025-08-13T07:57:08.472994403Z" level=info msg="Forcibly stopping sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\"" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.622 [WARNING][5339] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"11f4b399-8c5b-42d8-8ee1-f13c6bb84b22", ResourceVersion:"1004", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8", Pod:"goldmane-58fd7646b9-t8sp4", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.23.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali811ab0c35bd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.625 [INFO][5339] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.626 [INFO][5339] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" iface="eth0" netns="" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.627 [INFO][5339] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.627 [INFO][5339] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.729 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.730 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.730 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.740 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.740 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" HandleID="k8s-pod-network.f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Workload="srv--er0cq.gb1.brightbox.com-k8s-goldmane--58fd7646b9--t8sp4-eth0" Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.742 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:08.753978 containerd[1620]: 2025-08-13 07:57:08.747 [INFO][5339] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2" Aug 13 07:57:08.753978 containerd[1620]: time="2025-08-13T07:57:08.752561745Z" level=info msg="TearDown network for sandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" successfully" Aug 13 07:57:08.797644 containerd[1620]: time="2025-08-13T07:57:08.797012728Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:08.797644 containerd[1620]: time="2025-08-13T07:57:08.797170831Z" level=info msg="RemovePodSandbox \"f77d2acac12efeb1c1d350fada0773117969881612dffcb02929faa5da9fc2e2\" returns successfully" Aug 13 07:57:08.799384 containerd[1620]: time="2025-08-13T07:57:08.799351202Z" level=info msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.871 [WARNING][5360] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0", GenerateName:"calico-kube-controllers-85fc769f-", Namespace:"calico-system", SelfLink:"", UID:"f547ffd7-7d10-4436-85b6-ec353b820f63", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fc769f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef", Pod:"calico-kube-controllers-85fc769f-kftlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7643c35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.872 [INFO][5360] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.872 [INFO][5360] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" iface="eth0" netns="" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.872 [INFO][5360] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.872 [INFO][5360] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.925 [INFO][5367] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.925 [INFO][5367] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.925 [INFO][5367] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.941 [WARNING][5367] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.941 [INFO][5367] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.944 [INFO][5367] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:08.955224 containerd[1620]: 2025-08-13 07:57:08.948 [INFO][5360] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:08.956752 containerd[1620]: time="2025-08-13T07:57:08.956344623Z" level=info msg="TearDown network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" successfully" Aug 13 07:57:08.956752 containerd[1620]: time="2025-08-13T07:57:08.956380221Z" level=info msg="StopPodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" returns successfully" Aug 13 07:57:08.957286 containerd[1620]: time="2025-08-13T07:57:08.957053646Z" level=info msg="RemovePodSandbox for \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" Aug 13 07:57:08.957286 containerd[1620]: time="2025-08-13T07:57:08.957107608Z" level=info msg="Forcibly stopping sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\"" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.057 [WARNING][5382] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0", GenerateName:"calico-kube-controllers-85fc769f-", Namespace:"calico-system", SelfLink:"", UID:"f547ffd7-7d10-4436-85b6-ec353b820f63", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fc769f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef", Pod:"calico-kube-controllers-85fc769f-kftlc", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.23.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic7643c35c22", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.058 [INFO][5382] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.058 [INFO][5382] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" iface="eth0" netns="" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.058 [INFO][5382] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.058 [INFO][5382] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.115 [INFO][5389] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.115 [INFO][5389] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.115 [INFO][5389] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.127 [WARNING][5389] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.127 [INFO][5389] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" HandleID="k8s-pod-network.c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--kube--controllers--85fc769f--kftlc-eth0" Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.129 [INFO][5389] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:09.137891 containerd[1620]: 2025-08-13 07:57:09.133 [INFO][5382] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f" Aug 13 07:57:09.139806 containerd[1620]: time="2025-08-13T07:57:09.137948558Z" level=info msg="TearDown network for sandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" successfully" Aug 13 07:57:09.143382 containerd[1620]: time="2025-08-13T07:57:09.143337613Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:09.143466 containerd[1620]: time="2025-08-13T07:57:09.143417599Z" level=info msg="RemovePodSandbox \"c2f4eac079ca26e612fc1c0f37623f85a1900fb6f19a847f05e746b83427164f\" returns successfully" Aug 13 07:57:09.144289 containerd[1620]: time="2025-08-13T07:57:09.144187903Z" level=info msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.216 [WARNING][5403] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981413ed-74fe-461c-914c-e0dc01dda890", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6", Pod:"csi-node-driver-clt64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65fad7f54fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.216 [INFO][5403] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.218 [INFO][5403] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" iface="eth0" netns="" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.218 [INFO][5403] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.218 [INFO][5403] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.273 [INFO][5410] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.274 [INFO][5410] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.274 [INFO][5410] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.296 [WARNING][5410] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.296 [INFO][5410] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.302 [INFO][5410] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:09.318747 containerd[1620]: 2025-08-13 07:57:09.307 [INFO][5403] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.320709 containerd[1620]: time="2025-08-13T07:57:09.319291607Z" level=info msg="TearDown network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" successfully" Aug 13 07:57:09.320709 containerd[1620]: time="2025-08-13T07:57:09.319384090Z" level=info msg="StopPodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" returns successfully" Aug 13 07:57:09.324270 containerd[1620]: time="2025-08-13T07:57:09.323613841Z" level=info msg="RemovePodSandbox for \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" Aug 13 07:57:09.324270 containerd[1620]: time="2025-08-13T07:57:09.323660757Z" level=info msg="Forcibly stopping sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\"" Aug 13 07:57:09.398364 containerd[1620]: time="2025-08-13T07:57:09.394854753Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:09.398364 containerd[1620]: time="2025-08-13T07:57:09.397346478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Aug 13 07:57:09.401427 containerd[1620]: time="2025-08-13T07:57:09.399433580Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:09.405227 containerd[1620]: time="2025-08-13T07:57:09.405157322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:09.405863 containerd[1620]: time="2025-08-13T07:57:09.405813225Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 5.258605075s" Aug 13 07:57:09.406735 containerd[1620]: time="2025-08-13T07:57:09.406684755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:57:09.414144 containerd[1620]: time="2025-08-13T07:57:09.413644985Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Aug 13 07:57:09.421295 containerd[1620]: time="2025-08-13T07:57:09.421245317Z" level=info msg="CreateContainer within sandbox \"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:57:09.443697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2717205462.mount: Deactivated successfully. Aug 13 07:57:09.446354 containerd[1620]: time="2025-08-13T07:57:09.445491185Z" level=info msg="CreateContainer within sandbox \"55a7cb44ede70ddfcd9a2068eefc3931a6a1a4cf2bf433b66594d1bbc0e6033a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"de0fd722b8dde2d3fe4b705725742942c913a87fa9f5e93665ced51f863df98c\"" Aug 13 07:57:09.449753 containerd[1620]: time="2025-08-13T07:57:09.448189330Z" level=info msg="StartContainer for \"de0fd722b8dde2d3fe4b705725742942c913a87fa9f5e93665ced51f863df98c\"" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.413 [WARNING][5424] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"981413ed-74fe-461c-914c-e0dc01dda890", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6", Pod:"csi-node-driver-clt64", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.23.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali65fad7f54fc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.416 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.416 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" iface="eth0" netns="" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.416 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.416 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.497 [INFO][5435] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.497 [INFO][5435] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.497 [INFO][5435] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.509 [WARNING][5435] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.509 [INFO][5435] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" HandleID="k8s-pod-network.aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Workload="srv--er0cq.gb1.brightbox.com-k8s-csi--node--driver--clt64-eth0" Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.511 [INFO][5435] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:09.520746 containerd[1620]: 2025-08-13 07:57:09.515 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663" Aug 13 07:57:09.524590 containerd[1620]: time="2025-08-13T07:57:09.523298965Z" level=info msg="TearDown network for sandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" successfully" Aug 13 07:57:09.541409 containerd[1620]: time="2025-08-13T07:57:09.531454523Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:09.541409 containerd[1620]: time="2025-08-13T07:57:09.541328043Z" level=info msg="RemovePodSandbox \"aef37d09f986012817d4768959c034a5547f46ec7cfc7a4855a805addfcfc663\" returns successfully" Aug 13 07:57:09.543221 containerd[1620]: time="2025-08-13T07:57:09.542898187Z" level=info msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" Aug 13 07:57:09.580354 systemd[1]: run-containerd-runc-k8s.io-de0fd722b8dde2d3fe4b705725742942c913a87fa9f5e93665ced51f863df98c-runc.TDEmWW.mount: Deactivated successfully. 
Aug 13 07:57:09.683330 containerd[1620]: time="2025-08-13T07:57:09.682429230Z" level=info msg="StartContainer for \"de0fd722b8dde2d3fe4b705725742942c913a87fa9f5e93665ced51f863df98c\" returns successfully" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.625 [WARNING][5459] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.626 [INFO][5459] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.626 [INFO][5459] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" iface="eth0" netns="" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.626 [INFO][5459] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.626 [INFO][5459] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.687 [INFO][5482] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.689 [INFO][5482] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.689 [INFO][5482] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.699 [WARNING][5482] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.699 [INFO][5482] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.701 [INFO][5482] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:09.708025 containerd[1620]: 2025-08-13 07:57:09.704 [INFO][5459] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.708885 containerd[1620]: time="2025-08-13T07:57:09.708135559Z" level=info msg="TearDown network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" successfully" Aug 13 07:57:09.708885 containerd[1620]: time="2025-08-13T07:57:09.708175107Z" level=info msg="StopPodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" returns successfully" Aug 13 07:57:09.709391 containerd[1620]: time="2025-08-13T07:57:09.708988744Z" level=info msg="RemovePodSandbox for \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" Aug 13 07:57:09.709475 containerd[1620]: time="2025-08-13T07:57:09.709405388Z" level=info msg="Forcibly stopping sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\"" Aug 13 07:57:09.766392 kubelet[2873]: I0813 07:57:09.762909 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f8484686-4hn7r" podStartSLOduration=39.077344233 podStartE2EDuration="46.758472152s" podCreationTimestamp="2025-08-13 07:56:23 +0000 UTC" firstStartedPulling="2025-08-13 07:57:01.727363579 +0000 UTC m=+54.958035948" lastFinishedPulling="2025-08-13 07:57:09.408491485 +0000 UTC m=+62.639163867" observedRunningTime="2025-08-13 07:57:09.755702233 +0000 UTC m=+62.986374628" watchObservedRunningTime="2025-08-13 07:57:09.758472152 +0000 UTC m=+62.989144527" Aug 13 07:57:09.862205 containerd[1620]: time="2025-08-13T07:57:09.860700608Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:09.864884 containerd[1620]: time="2025-08-13T07:57:09.864792433Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Aug 13 07:57:09.873320 containerd[1620]: time="2025-08-13T07:57:09.873274232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 459.584177ms" Aug 13 07:57:09.873529 containerd[1620]: time="2025-08-13T07:57:09.873487401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Aug 13 07:57:09.876285 containerd[1620]: time="2025-08-13T07:57:09.876256424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 07:57:09.881709 containerd[1620]: time="2025-08-13T07:57:09.881488654Z" level=info msg="CreateContainer within sandbox \"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.811 [WARNING][5505] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" WorkloadEndpoint="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.811 [INFO][5505] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.811 [INFO][5505] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" iface="eth0" netns="" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.811 [INFO][5505] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.811 [INFO][5505] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.874 [INFO][5515] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.879 [INFO][5515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.879 [INFO][5515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.900 [WARNING][5515] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.900 [INFO][5515] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" HandleID="k8s-pod-network.68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Workload="srv--er0cq.gb1.brightbox.com-k8s-whisker--bb7577cf9--r5blg-eth0" Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.902 [INFO][5515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:09.909037 containerd[1620]: 2025-08-13 07:57:09.905 [INFO][5505] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb" Aug 13 07:57:09.911524 containerd[1620]: time="2025-08-13T07:57:09.909069740Z" level=info msg="TearDown network for sandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" successfully" Aug 13 07:57:09.916914 containerd[1620]: time="2025-08-13T07:57:09.916776631Z" level=info msg="CreateContainer within sandbox \"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f9ac58c7cf151180172fd7ab884c157a2c137f1afd126fbebc46cb44d14613bd\"" Aug 13 07:57:09.919125 containerd[1620]: time="2025-08-13T07:57:09.919066155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Aug 13 07:57:09.919422 containerd[1620]: time="2025-08-13T07:57:09.919382643Z" level=info msg="RemovePodSandbox \"68cf7708a65791edd7c4ca9c346cafb4d38132e343035bfd216d7f48155bdbdb\" returns successfully" Aug 13 07:57:09.920245 containerd[1620]: time="2025-08-13T07:57:09.920204049Z" level=info msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" Aug 13 07:57:09.922528 containerd[1620]: time="2025-08-13T07:57:09.922318204Z" level=info msg="StartContainer for \"f9ac58c7cf151180172fd7ab884c157a2c137f1afd126fbebc46cb44d14613bd\"" Aug 13 07:57:09.935205 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:09.941104 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:09.936349 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:10.145412 containerd[1620]: time="2025-08-13T07:57:10.145347391Z" level=info msg="StartContainer for \"f9ac58c7cf151180172fd7ab884c157a2c137f1afd126fbebc46cb44d14613bd\" returns successfully" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.026 [WARNING][5536] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f05ad85b-d4f3-4c2d-b462-454f0dd5790f", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e", Pod:"coredns-7c65d6cfc9-twz7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69932d82b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.029 [INFO][5536] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.029 [INFO][5536] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" iface="eth0" netns="" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.029 [INFO][5536] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.029 [INFO][5536] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.100 [INFO][5563] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.101 [INFO][5563] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.101 [INFO][5563] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.119 [WARNING][5563] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.119 [INFO][5563] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.122 [INFO][5563] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:10.149050 containerd[1620]: 2025-08-13 07:57:10.140 [INFO][5536] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.149050 containerd[1620]: time="2025-08-13T07:57:10.148955489Z" level=info msg="TearDown network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" successfully" Aug 13 07:57:10.149050 containerd[1620]: time="2025-08-13T07:57:10.148981688Z" level=info msg="StopPodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" returns successfully" Aug 13 07:57:10.153678 containerd[1620]: time="2025-08-13T07:57:10.151821965Z" level=info msg="RemovePodSandbox for \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" Aug 13 07:57:10.153678 containerd[1620]: time="2025-08-13T07:57:10.151861992Z" level=info msg="Forcibly stopping sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\"" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.257 [WARNING][5588] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"f05ad85b-d4f3-4c2d-b462-454f0dd5790f", ResourceVersion:"1024", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"fecdbe094256236625ce598bd37a91862b507905990e93ca4dc947cfcaa15b7e", Pod:"coredns-7c65d6cfc9-twz7x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.23.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali69932d82b8b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.259 [INFO][5588] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.260 [INFO][5588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" iface="eth0" netns="" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.260 [INFO][5588] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.261 [INFO][5588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.344 [INFO][5601] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.347 [INFO][5601] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.347 [INFO][5601] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.367 [WARNING][5601] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.367 [INFO][5601] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" HandleID="k8s-pod-network.8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Workload="srv--er0cq.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--twz7x-eth0" Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.371 [INFO][5601] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:10.398182 containerd[1620]: 2025-08-13 07:57:10.382 [INFO][5588] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46" Aug 13 07:57:10.398182 containerd[1620]: time="2025-08-13T07:57:10.397471147Z" level=info msg="TearDown network for sandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" successfully" Aug 13 07:57:10.463016 containerd[1620]: time="2025-08-13T07:57:10.461882209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:10.463016 containerd[1620]: time="2025-08-13T07:57:10.461997535Z" level=info msg="RemovePodSandbox \"8c40dad14461af4de72294bbf3a4dce6c763d986a9d71068c4929ffb89133d46\" returns successfully" Aug 13 07:57:10.466367 containerd[1620]: time="2025-08-13T07:57:10.464615018Z" level=info msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.576 [WARNING][5621] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d807f6-bf8b-4276-bd29-e9b753213504", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd", Pod:"calico-apiserver-75f8484686-ncws4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857ee685224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.577 [INFO][5621] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.577 [INFO][5621] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" iface="eth0" netns="" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.577 [INFO][5621] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.577 [INFO][5621] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.630 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.632 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.632 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.652 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.652 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.654 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:10.667124 containerd[1620]: 2025-08-13 07:57:10.661 [INFO][5621] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.667124 containerd[1620]: time="2025-08-13T07:57:10.665497440Z" level=info msg="TearDown network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" successfully" Aug 13 07:57:10.667124 containerd[1620]: time="2025-08-13T07:57:10.665550058Z" level=info msg="StopPodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" returns successfully" Aug 13 07:57:10.674948 containerd[1620]: time="2025-08-13T07:57:10.667614155Z" level=info msg="RemovePodSandbox for \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" Aug 13 07:57:10.674948 containerd[1620]: time="2025-08-13T07:57:10.667666795Z" level=info msg="Forcibly stopping sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\"" Aug 13 07:57:10.833960 kubelet[2873]: I0813 07:57:10.831400 2873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:57:10.844195 kubelet[2873]: I0813 07:57:10.843805 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-75f8484686-ncws4" podStartSLOduration=39.934264476 podStartE2EDuration="47.843786127s" podCreationTimestamp="2025-08-13 07:56:23 +0000 UTC" firstStartedPulling="2025-08-13 07:57:01.965844809 +0000 UTC m=+55.196517186" lastFinishedPulling="2025-08-13 07:57:09.875366452 +0000 UTC m=+63.106038837" observedRunningTime="2025-08-13 07:57:10.842212496 +0000 UTC m=+64.072884881" watchObservedRunningTime="2025-08-13 07:57:10.843786127 +0000 UTC m=+64.074458510" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.822 [WARNING][5642] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0", GenerateName:"calico-apiserver-75f8484686-", Namespace:"calico-apiserver", SelfLink:"", UID:"b6d807f6-bf8b-4276-bd29-e9b753213504", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 7, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"75f8484686", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-er0cq.gb1.brightbox.com", ContainerID:"0b06498923448f30c9691b3671d6c7315d05c844a269b3d9eed41f47d1455ecd", Pod:"calico-apiserver-75f8484686-ncws4", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.23.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali857ee685224", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.826 [INFO][5642] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.826 [INFO][5642] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" iface="eth0" netns="" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.826 [INFO][5642] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.826 [INFO][5642] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.924 [INFO][5649] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.924 [INFO][5649] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.924 [INFO][5649] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.953 [WARNING][5649] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.954 [INFO][5649] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" HandleID="k8s-pod-network.e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Workload="srv--er0cq.gb1.brightbox.com-k8s-calico--apiserver--75f8484686--ncws4-eth0" Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.958 [INFO][5649] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 07:57:10.970900 containerd[1620]: 2025-08-13 07:57:10.965 [INFO][5642] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a" Aug 13 07:57:10.970900 containerd[1620]: time="2025-08-13T07:57:10.969099258Z" level=info msg="TearDown network for sandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" successfully" Aug 13 07:57:10.979050 containerd[1620]: time="2025-08-13T07:57:10.978807546Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 13 07:57:10.979050 containerd[1620]: time="2025-08-13T07:57:10.978919420Z" level=info msg="RemovePodSandbox \"e59d7ade8779c3277a906132e97c88f3e0656cce02c74e2287cba08e2366383a\" returns successfully" Aug 13 07:57:14.383317 containerd[1620]: time="2025-08-13T07:57:14.383200989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:14.385084 containerd[1620]: time="2025-08-13T07:57:14.384895313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 07:57:14.386865 containerd[1620]: time="2025-08-13T07:57:14.386786746Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:14.390679 containerd[1620]: time="2025-08-13T07:57:14.390635793Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:14.391984 containerd[1620]: time="2025-08-13T07:57:14.391729327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 4.515010492s" Aug 13 07:57:14.391984 containerd[1620]: time="2025-08-13T07:57:14.391810911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Aug 13 07:57:14.421926 containerd[1620]: time="2025-08-13T07:57:14.421878761Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Aug 13 07:57:14.539896 containerd[1620]: time="2025-08-13T07:57:14.539833988Z" level=info msg="CreateContainer within sandbox \"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 13 07:57:14.592274 containerd[1620]: time="2025-08-13T07:57:14.592138179Z" level=info msg="CreateContainer within sandbox \"3f11aa13cf6584d274ff10da7acf9e0c269dcaef82c7d9df68676824ea0540ef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"344e0171ee6496b2de9f617a7762877250044ba98de11889422316f64efff0f9\"" Aug 13 07:57:14.602440 containerd[1620]: time="2025-08-13T07:57:14.602378904Z" level=info msg="StartContainer for \"344e0171ee6496b2de9f617a7762877250044ba98de11889422316f64efff0f9\"" Aug 13 07:57:14.876828 containerd[1620]: time="2025-08-13T07:57:14.873084208Z" level=info msg="StartContainer for \"344e0171ee6496b2de9f617a7762877250044ba98de11889422316f64efff0f9\" returns successfully" Aug 13 07:57:15.109378 kubelet[2873]: I0813 07:57:15.096781 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-85fc769f-kftlc" podStartSLOduration=33.768296935 podStartE2EDuration="46.088272107s" podCreationTimestamp="2025-08-13 07:56:29 +0000 UTC" firstStartedPulling="2025-08-13 07:57:02.094782328 +0000 UTC m=+55.325454704" lastFinishedPulling="2025-08-13 07:57:14.414757498 +0000 UTC m=+67.645429876" observedRunningTime="2025-08-13 07:57:15.079724618 +0000 UTC m=+68.310397008" watchObservedRunningTime="2025-08-13 07:57:15.088272107 +0000 UTC m=+68.318944490" Aug 13 07:57:15.520862 systemd[1]: run-containerd-runc-k8s.io-344e0171ee6496b2de9f617a7762877250044ba98de11889422316f64efff0f9-runc.3o0IlK.mount: Deactivated successfully. Aug 13 07:57:15.953264 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:15.964961 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:15.965395 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:16.981728 systemd[1]: Started sshd@10-10.230.74.218:22-49.247.36.49:24709.service - OpenSSH per-connection server daemon (49.247.36.49:24709). Aug 13 07:57:18.001039 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:17.998955 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:17.998976 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:18.759122 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1678353238.mount: Deactivated successfully. Aug 13 07:57:18.844035 sshd[5735]: Received disconnect from 49.247.36.49 port 24709:11: Bye Bye [preauth] Aug 13 07:57:18.844035 sshd[5735]: Disconnected from authenticating user root 49.247.36.49 port 24709 [preauth] Aug 13 07:57:18.861289 systemd[1]: sshd@10-10.230.74.218:22-49.247.36.49:24709.service: Deactivated successfully. Aug 13 07:57:20.050532 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:20.046557 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:20.046587 systemd-resolved[1515]: Flushed all caches. 
Aug 13 07:57:20.261082 containerd[1620]: time="2025-08-13T07:57:20.260773963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:20.270477 containerd[1620]: time="2025-08-13T07:57:20.270323164Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Aug 13 07:57:20.273429 containerd[1620]: time="2025-08-13T07:57:20.270592499Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:20.295953 containerd[1620]: time="2025-08-13T07:57:20.295872082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:20.313079 containerd[1620]: time="2025-08-13T07:57:20.312882890Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 5.886406288s" Aug 13 07:57:20.313323 containerd[1620]: time="2025-08-13T07:57:20.313294955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Aug 13 07:57:20.322227 containerd[1620]: time="2025-08-13T07:57:20.321835323Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Aug 13 07:57:20.342051 containerd[1620]: time="2025-08-13T07:57:20.341812137Z" level=info msg="CreateContainer within sandbox \"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Aug 13 07:57:20.419169 kubelet[2873]: I0813 07:57:20.418789 2873 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 07:57:20.668639 containerd[1620]: time="2025-08-13T07:57:20.667407353Z" level=info msg="CreateContainer within sandbox \"828de8ba870ecd2a5884e7b05dcfb26086faac31ea8c1706eb7de64732c644d8\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"629f1dd7b7455394da93337be10045bbcc7c5ff9bbc74c3c62690089b87bb4de\"" Aug 13 07:57:20.678135 containerd[1620]: time="2025-08-13T07:57:20.677911529Z" level=info msg="StartContainer for \"629f1dd7b7455394da93337be10045bbcc7c5ff9bbc74c3c62690089b87bb4de\"" Aug 13 07:57:21.075038 containerd[1620]: time="2025-08-13T07:57:21.074899694Z" level=info msg="StartContainer for \"629f1dd7b7455394da93337be10045bbcc7c5ff9bbc74c3c62690089b87bb4de\" returns successfully" Aug 13 07:57:21.233268 kubelet[2873]: I0813 07:57:21.217267 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-t8sp4" podStartSLOduration=35.017430889 podStartE2EDuration="53.215435038s" podCreationTimestamp="2025-08-13 07:56:28 +0000 UTC" firstStartedPulling="2025-08-13 07:57:02.116771974 +0000 UTC m=+55.347444343" lastFinishedPulling="2025-08-13 07:57:20.314776105 +0000 UTC m=+73.545448492" observedRunningTime="2025-08-13 07:57:21.20922607 +0000 UTC m=+74.439898458" watchObservedRunningTime="2025-08-13 07:57:21.215435038 +0000 UTC m=+74.446107413" 
Aug 13 07:57:22.095908 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:22.099449 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:22.095942 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:23.969101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1196598428.mount: Deactivated successfully. Aug 13 07:57:24.005481 containerd[1620]: time="2025-08-13T07:57:24.005319434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:24.008629 containerd[1620]: time="2025-08-13T07:57:24.008431939Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Aug 13 07:57:24.016439 containerd[1620]: time="2025-08-13T07:57:24.016358199Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:24.022316 containerd[1620]: time="2025-08-13T07:57:24.022208746Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:24.023807 containerd[1620]: time="2025-08-13T07:57:24.023541284Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.701644551s" Aug 13 07:57:24.023807 containerd[1620]: time="2025-08-13T07:57:24.023612755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Aug 13 07:57:24.076217 containerd[1620]: time="2025-08-13T07:57:24.076018359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 07:57:24.093752 containerd[1620]: time="2025-08-13T07:57:24.093618376Z" level=info msg="CreateContainer within sandbox \"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Aug 13 07:57:24.127665 containerd[1620]: time="2025-08-13T07:57:24.127477589Z" level=info msg="CreateContainer within sandbox \"682dfdbc9c425a48d580f00125e380d5315b47ec0b008ca5ffcfd367ef4a4f38\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"651377c5f100f42dfd47bb6cb3e1be441aaedded3058eebba01b9785529a763a\"" Aug 13 07:57:24.129052 containerd[1620]: time="2025-08-13T07:57:24.128502463Z" level=info msg="StartContainer for \"651377c5f100f42dfd47bb6cb3e1be441aaedded3058eebba01b9785529a763a\"" Aug 13 07:57:24.519011 containerd[1620]: time="2025-08-13T07:57:24.518952843Z" level=info msg="StartContainer for \"651377c5f100f42dfd47bb6cb3e1be441aaedded3058eebba01b9785529a763a\" returns successfully" Aug 13 07:57:25.653616 kubelet[2873]: I0813 07:57:25.639892 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-85b7cdd6b9-dbvzq" podStartSLOduration=4.3203149530000005 podStartE2EDuration="29.60949894s" podCreationTimestamp="2025-08-13 07:56:56 +0000 UTC" 
firstStartedPulling="2025-08-13 07:56:58.775977976 +0000 UTC m=+52.006650348" lastFinishedPulling="2025-08-13 07:57:24.065161952 +0000 UTC m=+77.295834335" observedRunningTime="2025-08-13 07:57:25.451459551 +0000 UTC m=+78.682131941" watchObservedRunningTime="2025-08-13 07:57:25.60949894 +0000 UTC m=+78.840171322" Aug 13 07:57:25.950373 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:25.934416 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:25.934460 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:26.157747 systemd[1]: Started sshd@11-10.230.74.218:22-139.178.68.195:60682.service - OpenSSH per-connection server daemon (139.178.68.195:60682). Aug 13 07:57:27.240431 containerd[1620]: time="2025-08-13T07:57:27.240344227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:27.243738 containerd[1620]: time="2025-08-13T07:57:27.240960824Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 07:57:27.258273 containerd[1620]: time="2025-08-13T07:57:27.257216914Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:27.268278 containerd[1620]: time="2025-08-13T07:57:27.266900356Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 07:57:27.272086 sshd[5910]: Accepted publickey for core from 139.178.68.195 port 60682 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:27.275610 sshd[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:27.297143 containerd[1620]: time="2025-08-13T07:57:27.269055317Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 3.192966524s" Aug 13 07:57:27.297143 containerd[1620]: time="2025-08-13T07:57:27.277329702Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 07:57:27.352970 systemd-logind[1591]: New session 12 of user core. Aug 13 07:57:27.356630 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 07:57:27.486623 containerd[1620]: time="2025-08-13T07:57:27.483050804Z" level=info msg="CreateContainer within sandbox \"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 07:57:27.604614 containerd[1620]: time="2025-08-13T07:57:27.604555206Z" level=info msg="CreateContainer within sandbox \"33940691b38d77ded765a5072c7a4aa78cc9342a70e2ab7d2e8fc336464b36e6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"81805e297f108e2d3308d3a4e4d525aa12fd0b3a79d67c8dced60a307060fe98\"" Aug 13 07:57:27.605649 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195682873.mount: Deactivated successfully. Aug 13 07:57:27.607957 containerd[1620]: time="2025-08-13T07:57:27.607885551Z" level=info msg="StartContainer for \"81805e297f108e2d3308d3a4e4d525aa12fd0b3a79d67c8dced60a307060fe98\"" Aug 13 07:57:27.740492 systemd[1]: run-containerd-runc-k8s.io-81805e297f108e2d3308d3a4e4d525aa12fd0b3a79d67c8dced60a307060fe98-runc.ZEPom2.mount: Deactivated successfully. Aug 13 07:57:27.993415 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:27.984916 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:27.984934 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:28.088125 containerd[1620]: time="2025-08-13T07:57:28.088052042Z" level=info msg="StartContainer for \"81805e297f108e2d3308d3a4e4d525aa12fd0b3a79d67c8dced60a307060fe98\" returns successfully" Aug 13 07:57:28.586367 kubelet[2873]: I0813 07:57:28.584868 2873 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-clt64" podStartSLOduration=31.739558764999998 podStartE2EDuration="59.584842185s" podCreationTimestamp="2025-08-13 07:56:29 +0000 UTC" firstStartedPulling="2025-08-13 07:56:59.447904864 +0000 UTC m=+52.678577233" lastFinishedPulling="2025-08-13 07:57:27.293188271 +0000 UTC m=+80.523860653" observedRunningTime="2025-08-13 07:57:28.574514826 +0000 UTC m=+81.805187215" watchObservedRunningTime="2025-08-13 07:57:28.584842185 +0000 UTC m=+81.815514554" Aug 13 07:57:29.298247 sshd[5910]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:29.313783 systemd[1]: sshd@11-10.230.74.218:22-139.178.68.195:60682.service: Deactivated successfully. Aug 13 07:57:29.320788 systemd-logind[1591]: Session 12 logged out. Waiting for processes to exit. Aug 13 07:57:29.322571 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 07:57:29.328089 systemd-logind[1591]: Removed session 12. Aug 13 07:57:29.573252 kubelet[2873]: I0813 07:57:29.566002 2873 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 07:57:29.581824 kubelet[2873]: I0813 07:57:29.581789 2873 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 07:57:30.031790 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:30.034681 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:30.031800 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:34.165759 systemd[1]: Started sshd@12-10.230.74.218:22-143.92.37.154:59628.service - OpenSSH per-connection server daemon (143.92.37.154:59628). 
Aug 13 07:57:34.468064 systemd[1]: Started sshd@13-10.230.74.218:22-139.178.68.195:56604.service - OpenSSH per-connection server daemon (139.178.68.195:56604). Aug 13 07:57:35.441355 sshd[6012]: Accepted publickey for core from 139.178.68.195 port 56604 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:35.449953 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:35.473210 systemd-logind[1591]: New session 13 of user core. Aug 13 07:57:35.478686 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 07:57:35.694315 sshd[6015]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:37.112187 sshd[6012]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:37.124170 systemd-logind[1591]: Session 13 logged out. Waiting for processes to exit. Aug 13 07:57:37.126721 systemd[1]: sshd@13-10.230.74.218:22-139.178.68.195:56604.service: Deactivated successfully. Aug 13 07:57:37.149364 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 07:57:37.152365 systemd-logind[1591]: Removed session 13. Aug 13 07:57:37.877334 sshd[6010]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:38.270369 sshd[6028]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:40.196312 sshd[6010]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:40.596463 sshd[6035]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:42.262545 systemd[1]: Started sshd@14-10.230.74.218:22-139.178.68.195:53394.service - OpenSSH per-connection server daemon (139.178.68.195:53394). Aug 13 07:57:42.800313 sshd[6010]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:42.996263 sshd[6010]: Received disconnect from 143.92.37.154 port 59628:11: [preauth] Aug 13 07:57:42.996263 sshd[6010]: Disconnected from authenticating user root 143.92.37.154 port 59628 [preauth] Aug 13 07:57:43.000937 systemd[1]: sshd@12-10.230.74.218:22-143.92.37.154:59628.service: Deactivated successfully. Aug 13 07:57:43.218686 systemd[1]: Started sshd@15-10.230.74.218:22-143.92.37.154:10420.service - OpenSSH per-connection server daemon (143.92.37.154:10420). Aug 13 07:57:43.232455 sshd[6038]: Accepted publickey for core from 139.178.68.195 port 53394 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:43.239991 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:43.288343 systemd-logind[1591]: New session 14 of user core. Aug 13 07:57:43.291951 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 07:57:43.424430 systemd[1]: run-containerd-runc-k8s.io-629f1dd7b7455394da93337be10045bbcc7c5ff9bbc74c3c62690089b87bb4de-runc.bmH6Al.mount: Deactivated successfully. Aug 13 07:57:43.994222 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:43.983478 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:43.983515 systemd-resolved[1515]: Flushed all caches. 
Aug 13 07:57:44.829061 sshd[6096]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:44.883650 sshd[6038]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:44.898600 systemd[1]: sshd@14-10.230.74.218:22-139.178.68.195:53394.service: Deactivated successfully. Aug 13 07:57:44.922610 systemd-logind[1591]: Session 14 logged out. Waiting for processes to exit. Aug 13 07:57:44.926067 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 07:57:44.942015 systemd-logind[1591]: Removed session 14. Aug 13 07:57:45.038141 systemd[1]: Started sshd@16-10.230.74.218:22-139.178.68.195:53408.service - OpenSSH per-connection server daemon (139.178.68.195:53408). Aug 13 07:57:45.963327 sshd[6100]: Accepted publickey for core from 139.178.68.195 port 53408 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:45.966611 sshd[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:45.986350 systemd-logind[1591]: New session 15 of user core. Aug 13 07:57:45.991694 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 07:57:46.037442 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:46.031832 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:57:46.031842 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:46.579363 sshd[6045]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:46.909922 sshd[6100]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:46.926084 systemd-logind[1591]: Session 15 logged out. Waiting for processes to exit. Aug 13 07:57:46.927700 systemd[1]: sshd@16-10.230.74.218:22-139.178.68.195:53408.service: Deactivated successfully. Aug 13 07:57:46.937346 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 07:57:46.940021 systemd-logind[1591]: Removed session 15. Aug 13 07:57:46.996582 sshd[6109]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:47.068900 systemd[1]: Started sshd@17-10.230.74.218:22-139.178.68.195:53418.service - OpenSSH per-connection server daemon (139.178.68.195:53418). Aug 13 07:57:48.019013 sshd[6113]: Accepted publickey for core from 139.178.68.195 port 53418 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:48.023360 sshd[6113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:48.033683 systemd-logind[1591]: New session 16 of user core. Aug 13 07:57:48.041139 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 07:57:48.974755 sshd[6113]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:48.983730 systemd[1]: sshd@17-10.230.74.218:22-139.178.68.195:53418.service: Deactivated successfully. Aug 13 07:57:48.990657 systemd-logind[1591]: Session 16 logged out. Waiting for processes to exit. Aug 13 07:57:48.992226 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 07:57:48.994869 systemd-logind[1591]: Removed session 16. Aug 13 07:57:49.021853 sshd[6045]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:49.447353 sshd[6165]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:50.130440 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:57:50.126474 systemd-resolved[1515]: Under memory pressure, flushing caches. 
Aug 13 07:57:50.126485 systemd-resolved[1515]: Flushed all caches. Aug 13 07:57:51.226338 sshd[6045]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:51.435827 sshd[6045]: Received disconnect from 143.92.37.154 port 10420:11: [preauth] Aug 13 07:57:51.435827 sshd[6045]: Disconnected from authenticating user root 143.92.37.154 port 10420 [preauth] Aug 13 07:57:51.448640 systemd[1]: sshd@15-10.230.74.218:22-143.92.37.154:10420.service: Deactivated successfully. Aug 13 07:57:51.651567 systemd[1]: Started sshd@18-10.230.74.218:22-143.92.37.154:40942.service - OpenSSH per-connection server daemon (143.92.37.154:40942). Aug 13 07:57:53.138958 sshd[6178]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:54.130501 systemd[1]: Started sshd@19-10.230.74.218:22-139.178.68.195:50866.service - OpenSSH per-connection server daemon (139.178.68.195:50866). Aug 13 07:57:54.988854 sshd[6176]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:55.063773 sshd[6179]: Accepted publickey for core from 139.178.68.195 port 50866 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:57:55.071757 sshd[6179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:57:55.088731 systemd-logind[1591]: New session 17 of user core. Aug 13 07:57:55.095865 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 07:57:55.392469 sshd[6183]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:56.613632 sshd[6179]: pam_unix(sshd:session): session closed for user core Aug 13 07:57:56.628115 systemd-logind[1591]: Session 17 logged out. Waiting for processes to exit. Aug 13 07:57:56.630606 systemd[1]: sshd@19-10.230.74.218:22-139.178.68.195:50866.service: Deactivated successfully. Aug 13 07:57:56.661005 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 07:57:56.664115 systemd-logind[1591]: Removed session 17. Aug 13 07:57:56.986015 sshd[6176]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:57.384445 sshd[6195]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=143.92.37.154 user=root Aug 13 07:57:58.932155 sshd[6176]: PAM: Permission denied for root from 143.92.37.154 Aug 13 07:57:59.135942 sshd[6176]: Received disconnect from 143.92.37.154 port 40942:11: [preauth] Aug 13 07:57:59.135942 sshd[6176]: Disconnected from authenticating user root 143.92.37.154 port 40942 [preauth] Aug 13 07:57:59.145288 systemd[1]: sshd@18-10.230.74.218:22-143.92.37.154:40942.service: Deactivated successfully. Aug 13 07:58:01.769550 systemd[1]: Started sshd@20-10.230.74.218:22-139.178.68.195:51182.service - OpenSSH per-connection server daemon (139.178.68.195:51182). Aug 13 07:58:02.726404 sshd[6199]: Accepted publickey for core from 139.178.68.195 port 51182 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:02.731571 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:02.747517 systemd-logind[1591]: New session 18 of user core. Aug 13 07:58:02.757837 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 07:58:03.965367 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:03.951894 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:03.951991 systemd-resolved[1515]: Flushed all caches. 
Aug 13 07:58:04.447999 sshd[6199]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:04.461565 systemd-logind[1591]: Session 18 logged out. Waiting for processes to exit. Aug 13 07:58:04.465768 systemd[1]: sshd@20-10.230.74.218:22-139.178.68.195:51182.service: Deactivated successfully. Aug 13 07:58:04.495017 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 07:58:04.500698 systemd-logind[1591]: Removed session 18. Aug 13 07:58:05.998969 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:06.002125 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:05.998981 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:09.613420 systemd[1]: Started sshd@21-10.230.74.218:22-139.178.68.195:51194.service - OpenSSH per-connection server daemon (139.178.68.195:51194). Aug 13 07:58:10.576283 sshd[6215]: Accepted publickey for core from 139.178.68.195 port 51194 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:10.587779 sshd[6215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:10.624054 systemd-logind[1591]: New session 19 of user core. Aug 13 07:58:10.628295 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 07:58:11.948038 sshd[6215]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:11.961556 systemd[1]: sshd@21-10.230.74.218:22-139.178.68.195:51194.service: Deactivated successfully. Aug 13 07:58:11.969305 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 07:58:11.970087 systemd-logind[1591]: Session 19 logged out. Waiting for processes to exit. Aug 13 07:58:11.973286 systemd-logind[1591]: Removed session 19. Aug 13 07:58:12.018864 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:12.015367 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:12.015380 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:12.098844 systemd[1]: Started sshd@22-10.230.74.218:22-139.178.68.195:42790.service - OpenSSH per-connection server daemon (139.178.68.195:42790). Aug 13 07:58:13.025144 sshd[6231]: Accepted publickey for core from 139.178.68.195 port 42790 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:13.029526 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:13.046334 systemd-logind[1591]: New session 20 of user core. Aug 13 07:58:13.055812 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 07:58:14.072508 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:14.063148 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:14.065109 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:14.406444 sshd[6231]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:14.426850 systemd[1]: sshd@22-10.230.74.218:22-139.178.68.195:42790.service: Deactivated successfully. Aug 13 07:58:14.447622 systemd-logind[1591]: Session 20 logged out. Waiting for processes to exit. Aug 13 07:58:14.449505 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 07:58:14.454775 systemd-logind[1591]: Removed session 20. Aug 13 07:58:14.564798 systemd[1]: Started sshd@23-10.230.74.218:22-139.178.68.195:42800.service - OpenSSH per-connection server daemon (139.178.68.195:42800). 
Aug 13 07:58:15.528259 sshd[6289]: Accepted publickey for core from 139.178.68.195 port 42800 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:15.529654 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:15.541299 systemd-logind[1591]: New session 21 of user core. Aug 13 07:58:15.548707 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 07:58:17.983489 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:17.967154 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:17.967171 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:20.027320 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:20.031848 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:20.031897 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:22.105585 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:22.084666 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:22.084695 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:22.459117 sshd[6289]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:22.539008 systemd[1]: sshd@23-10.230.74.218:22-139.178.68.195:42800.service: Deactivated successfully. Aug 13 07:58:22.554515 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 07:58:22.556219 systemd-logind[1591]: Session 21 logged out. Waiting for processes to exit. Aug 13 07:58:22.613690 systemd[1]: Started sshd@24-10.230.74.218:22-139.178.68.195:43002.service - OpenSSH per-connection server daemon (139.178.68.195:43002). Aug 13 07:58:22.621803 systemd-logind[1591]: Removed session 21. Aug 13 07:58:23.957941 sshd[6341]: Accepted publickey for core from 139.178.68.195 port 43002 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:23.999443 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:24.128449 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:24.115160 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:24.120622 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:24.183515 systemd-logind[1591]: New session 22 of user core. Aug 13 07:58:24.193924 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 07:58:26.232471 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:26.231091 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:26.231132 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:28.301445 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:28.286558 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:28.289897 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:28.784350 kubelet[2873]: E0813 07:58:27.515127 2873 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.453s" Aug 13 07:58:30.350496 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:30.273743 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:30.273761 systemd-resolved[1515]: Flushed all caches. 
Aug 13 07:58:31.094837 kubelet[2873]: E0813 07:58:31.093049 2873 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.506s" Aug 13 07:58:31.826796 sshd[6341]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:31.925654 systemd[1]: sshd@24-10.230.74.218:22-139.178.68.195:43002.service: Deactivated successfully. Aug 13 07:58:31.940417 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 07:58:31.940734 systemd-logind[1591]: Session 22 logged out. Waiting for processes to exit. Aug 13 07:58:31.965983 systemd-logind[1591]: Removed session 22. Aug 13 07:58:32.007706 systemd[1]: Started sshd@25-10.230.74.218:22-139.178.68.195:43080.service - OpenSSH per-connection server daemon (139.178.68.195:43080). Aug 13 07:58:32.319915 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:32.302892 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:32.302913 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:33.012968 sshd[6375]: Accepted publickey for core from 139.178.68.195 port 43080 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:33.016842 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:33.031153 systemd-logind[1591]: New session 23 of user core. Aug 13 07:58:33.036200 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 07:58:33.289628 systemd[1]: Started sshd@26-10.230.74.218:22-49.247.36.49:7042.service - OpenSSH per-connection server daemon (49.247.36.49:7042). Aug 13 07:58:34.359301 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:34.357623 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:34.357646 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:35.148380 sshd[6393]: Received disconnect from 49.247.36.49 port 7042:11: Bye Bye [preauth] Aug 13 07:58:35.148380 sshd[6393]: Disconnected from authenticating user root 49.247.36.49 port 7042 [preauth] Aug 13 07:58:35.156047 systemd[1]: sshd@26-10.230.74.218:22-49.247.36.49:7042.service: Deactivated successfully. Aug 13 07:58:35.420514 sshd[6375]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:35.428827 systemd[1]: sshd@25-10.230.74.218:22-139.178.68.195:43080.service: Deactivated successfully. Aug 13 07:58:35.444302 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 07:58:35.444753 systemd-logind[1591]: Session 23 logged out. Waiting for processes to exit. Aug 13 07:58:35.452219 systemd-logind[1591]: Removed session 23. Aug 13 07:58:36.406910 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:36.400338 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:36.400377 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:40.610010 systemd[1]: Started sshd@27-10.230.74.218:22-139.178.68.195:53494.service - OpenSSH per-connection server daemon (139.178.68.195:53494). Aug 13 07:58:41.641018 sshd[6437]: Accepted publickey for core from 139.178.68.195 port 53494 ssh2: RSA SHA256:OaWZFdeXPh6CYYASI1PRTz4egRCVAyEUFgarVyGxwBQ Aug 13 07:58:41.645600 sshd[6437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 07:58:41.683005 systemd-logind[1591]: New session 24 of user core. Aug 13 07:58:41.688258 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 07:58:43.794654 systemd[1]: run-containerd-runc-k8s.io-344e0171ee6496b2de9f617a7762877250044ba98de11889422316f64efff0f9-runc.gJcxhU.mount: Deactivated successfully. Aug 13 07:58:44.024969 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:44.018496 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:44.018515 systemd-resolved[1515]: Flushed all caches. Aug 13 07:58:44.601810 sshd[6437]: pam_unix(sshd:session): session closed for user core Aug 13 07:58:44.652681 systemd[1]: sshd@27-10.230.74.218:22-139.178.68.195:53494.service: Deactivated successfully. Aug 13 07:58:44.656979 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 07:58:44.658711 systemd-logind[1591]: Session 24 logged out. Waiting for processes to exit. Aug 13 07:58:44.662077 systemd-logind[1591]: Removed session 24. Aug 13 07:58:46.062448 systemd-resolved[1515]: Under memory pressure, flushing caches. Aug 13 07:58:46.065759 systemd-journald[1188]: Under memory pressure, flushing caches. Aug 13 07:58:46.062468 systemd-resolved[1515]: Flushed all caches.