Jul 7 02:56:14.063317 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 7 02:56:14.066385 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:56:14.066402 kernel: BIOS-provided physical RAM map: Jul 7 02:56:14.066420 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jul 7 02:56:14.066430 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jul 7 02:56:14.066441 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jul 7 02:56:14.066453 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Jul 7 02:56:14.066463 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Jul 7 02:56:14.066474 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 7 02:56:14.066484 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Jul 7 02:56:14.066495 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jul 7 02:56:14.066506 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jul 7 02:56:14.066521 kernel: NX (Execute Disable) protection: active Jul 7 02:56:14.066532 kernel: APIC: Static calls initialized Jul 7 02:56:14.066545 kernel: SMBIOS 2.8 present. Jul 7 02:56:14.066557 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Jul 7 02:56:14.066569 kernel: Hypervisor detected: KVM Jul 7 02:56:14.066585 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 7 02:56:14.066597 kernel: kvm-clock: using sched offset of 4363495056 cycles Jul 7 02:56:14.066610 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 7 02:56:14.066622 kernel: tsc: Detected 2499.998 MHz processor Jul 7 02:56:14.066634 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 7 02:56:14.066646 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 7 02:56:14.066658 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Jul 7 02:56:14.066669 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jul 7 02:56:14.066681 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 7 02:56:14.066698 kernel: Using GB pages for direct mapping Jul 7 02:56:14.066710 kernel: ACPI: Early table checksum verification disabled Jul 7 02:56:14.066721 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Jul 7 02:56:14.066733 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066760 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066772 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066783 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Jul 7 02:56:14.066795 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066807 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 
02:56:14.066824 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066836 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 7 02:56:14.066848 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Jul 7 02:56:14.066860 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Jul 7 02:56:14.066872 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Jul 7 02:56:14.066890 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Jul 7 02:56:14.066903 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Jul 7 02:56:14.066920 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Jul 7 02:56:14.066933 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Jul 7 02:56:14.066945 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 7 02:56:14.066957 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 7 02:56:14.066969 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jul 7 02:56:14.066981 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Jul 7 02:56:14.066994 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jul 7 02:56:14.067006 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Jul 7 02:56:14.067023 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jul 7 02:56:14.067035 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Jul 7 02:56:14.067047 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jul 7 02:56:14.067059 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Jul 7 02:56:14.067071 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jul 7 02:56:14.067083 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Jul 7 02:56:14.067095 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jul 7 02:56:14.067107 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Jul 7 02:56:14.067119 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jul 7 02:56:14.067136 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Jul 7 02:56:14.067149 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jul 7 02:56:14.067161 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jul 7 02:56:14.067173 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Jul 7 02:56:14.067185 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Jul 7 02:56:14.067198 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Jul 7 02:56:14.067211 kernel: Zone ranges: Jul 7 02:56:14.067223 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 7 02:56:14.067235 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Jul 7 02:56:14.067252 kernel: Normal empty Jul 7 02:56:14.067265 kernel: Movable zone start for each node Jul 7 02:56:14.067277 kernel: Early memory node ranges Jul 7 02:56:14.067289 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jul 7 02:56:14.067301 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Jul 7 02:56:14.067313 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Jul 7 02:56:14.067326 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 7 02:56:14.067352 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jul 7 02:56:14.067367 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Jul 7 02:56:14.067379 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 7 02:56:14.067398 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 7 02:56:14.067410 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 7 02:56:14.067423 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 
global_irq 2 dfl dfl) Jul 7 02:56:14.067435 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 7 02:56:14.067447 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 7 02:56:14.067460 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 7 02:56:14.067472 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 7 02:56:14.067484 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 7 02:56:14.067496 kernel: TSC deadline timer available Jul 7 02:56:14.067514 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Jul 7 02:56:14.067526 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 7 02:56:14.067538 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Jul 7 02:56:14.067550 kernel: Booting paravirtualized kernel on KVM Jul 7 02:56:14.067563 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 7 02:56:14.067575 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Jul 7 02:56:14.067588 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Jul 7 02:56:14.067600 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Jul 7 02:56:14.067612 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Jul 7 02:56:14.067629 kernel: kvm-guest: PV spinlocks enabled Jul 7 02:56:14.067642 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 7 02:56:14.067656 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:56:14.067669 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 7 02:56:14.067681 kernel: random: crng init done Jul 7 02:56:14.067693 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 7 02:56:14.067705 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 7 02:56:14.067718 kernel: Fallback order for Node 0: 0 Jul 7 02:56:14.067744 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Jul 7 02:56:14.067759 kernel: Policy zone: DMA32 Jul 7 02:56:14.067771 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 7 02:56:14.067783 kernel: software IO TLB: area num 16. Jul 7 02:56:14.067796 kernel: Memory: 1901536K/2096616K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 194820K reserved, 0K cma-reserved) Jul 7 02:56:14.067808 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Jul 7 02:56:14.067821 kernel: Kernel/User page tables isolation: enabled Jul 7 02:56:14.067833 kernel: ftrace: allocating 37966 entries in 149 pages Jul 7 02:56:14.067845 kernel: ftrace: allocated 149 pages with 4 groups Jul 7 02:56:14.067863 kernel: Dynamic Preempt: voluntary Jul 7 02:56:14.067875 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 7 02:56:14.067888 kernel: rcu: RCU event tracing is enabled. Jul 7 02:56:14.067901 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Jul 7 02:56:14.067914 kernel: Trampoline variant of Tasks RCU enabled. 
Jul 7 02:56:14.067938 kernel: Rude variant of Tasks RCU enabled. Jul 7 02:56:14.067956 kernel: Tracing variant of Tasks RCU enabled. Jul 7 02:56:14.067969 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 7 02:56:14.067982 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Jul 7 02:56:14.067994 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Jul 7 02:56:14.068007 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 7 02:56:14.068020 kernel: Console: colour VGA+ 80x25 Jul 7 02:56:14.068038 kernel: printk: console [tty0] enabled Jul 7 02:56:14.068051 kernel: printk: console [ttyS0] enabled Jul 7 02:56:14.068064 kernel: ACPI: Core revision 20230628 Jul 7 02:56:14.068076 kernel: APIC: Switch to symmetric I/O mode setup Jul 7 02:56:14.068089 kernel: x2apic enabled Jul 7 02:56:14.068107 kernel: APIC: Switched APIC routing to: physical x2apic Jul 7 02:56:14.068120 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 7 02:56:14.068133 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Jul 7 02:56:14.068146 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 7 02:56:14.068159 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jul 7 02:56:14.068171 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jul 7 02:56:14.068184 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 7 02:56:14.068197 kernel: Spectre V2 : Mitigation: Retpolines Jul 7 02:56:14.068209 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 7 02:56:14.068222 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jul 7 02:56:14.068240 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 7 02:56:14.068253 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 7 02:56:14.068266 kernel: MDS: Mitigation: Clear CPU buffers Jul 7 02:56:14.068278 kernel: MMIO Stale Data: Unknown: No mitigations Jul 7 02:56:14.068291 kernel: SRBDS: Unknown: Dependent on hypervisor status Jul 7 02:56:14.068303 kernel: ITS: Mitigation: Aligned branch/return thunks Jul 7 02:56:14.068316 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 7 02:56:14.068329 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 7 02:56:14.070805 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 7 02:56:14.070824 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 7 02:56:14.070846 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jul 7 02:56:14.070859 kernel: Freeing SMP alternatives memory: 32K Jul 7 02:56:14.070872 kernel: pid_max: default: 32768 minimum: 301 Jul 7 02:56:14.070885 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 7 02:56:14.070898 kernel: landlock: Up and running. Jul 7 02:56:14.070911 kernel: SELinux: Initializing. 
Jul 7 02:56:14.070924 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 02:56:14.070937 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jul 7 02:56:14.070949 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Jul 7 02:56:14.070963 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:56:14.070976 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:56:14.070994 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Jul 7 02:56:14.071008 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Jul 7 02:56:14.071021 kernel: signal: max sigframe size: 1776 Jul 7 02:56:14.071034 kernel: rcu: Hierarchical SRCU implementation. Jul 7 02:56:14.071048 kernel: rcu: Max phase no-delay instances is 400. Jul 7 02:56:14.071061 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 7 02:56:14.071074 kernel: smp: Bringing up secondary CPUs ... Jul 7 02:56:14.071087 kernel: smpboot: x86: Booting SMP configuration: Jul 7 02:56:14.071100 kernel: .... node #0, CPUs: #1 Jul 7 02:56:14.071118 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jul 7 02:56:14.071131 kernel: smp: Brought up 1 node, 2 CPUs Jul 7 02:56:14.071144 kernel: smpboot: Max logical packages: 16 Jul 7 02:56:14.071157 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Jul 7 02:56:14.071170 kernel: devtmpfs: initialized Jul 7 02:56:14.071183 kernel: x86/mm: Memory block size: 128MB Jul 7 02:56:14.071196 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 7 02:56:14.071210 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Jul 7 02:56:14.071223 kernel: pinctrl core: initialized pinctrl subsystem Jul 7 02:56:14.071240 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 7 02:56:14.071253 kernel: audit: initializing netlink subsys (disabled) Jul 7 02:56:14.071267 kernel: audit: type=2000 audit(1751856972.573:1): state=initialized audit_enabled=0 res=1 Jul 7 02:56:14.071280 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 7 02:56:14.071293 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 7 02:56:14.071306 kernel: cpuidle: using governor menu Jul 7 02:56:14.071319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 7 02:56:14.071332 kernel: dca service started, version 1.12.1 Jul 7 02:56:14.071372 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 7 02:56:14.071393 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 7 02:56:14.071406 kernel: PCI: Using configuration type 1 for base access Jul 7 02:56:14.071419 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 7 02:56:14.071432 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 7 02:56:14.071445 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 7 02:56:14.071496 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 7 02:56:14.071511 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 7 02:56:14.071524 kernel: ACPI: Added _OSI(Module Device) Jul 7 02:56:14.071537 kernel: ACPI: Added _OSI(Processor Device) Jul 7 02:56:14.071556 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 7 02:56:14.071569 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 7 02:56:14.071582 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 7 02:56:14.071595 kernel: ACPI: Interpreter enabled Jul 7 02:56:14.071608 kernel: ACPI: PM: (supports S0 S5) Jul 7 02:56:14.071621 kernel: ACPI: Using IOAPIC for interrupt routing Jul 7 02:56:14.071634 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 7 02:56:14.071647 kernel: PCI: Using E820 reservations for host bridge windows Jul 7 02:56:14.071660 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 7 02:56:14.071678 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 7 02:56:14.071989 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 7 02:56:14.072180 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 7 02:56:14.073400 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 7 02:56:14.073424 kernel: PCI host bridge to bus 0000:00 Jul 7 02:56:14.073615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jul 7 02:56:14.073790 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 7 02:56:14.073970 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 7 02:56:14.074136 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Jul 7 02:56:14.074303 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 7 02:56:14.075512 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Jul 7 02:56:14.075675 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 7 02:56:14.075890 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 7 02:56:14.076110 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Jul 7 02:56:14.076296 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Jul 7 02:56:14.077569 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Jul 7 02:56:14.077759 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Jul 7 02:56:14.077933 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 7 02:56:14.078142 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.078319 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Jul 7 02:56:14.078529 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.078702 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Jul 7 02:56:14.078905 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.079078 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Jul 7 02:56:14.079267 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.081528 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Jul 7 02:56:14.081770 kernel: 
pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.081962 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Jul 7 02:56:14.082162 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.082380 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Jul 7 02:56:14.082590 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.082774 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Jul 7 02:56:14.082966 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Jul 7 02:56:14.083137 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Jul 7 02:56:14.083334 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jul 7 02:56:14.085598 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Jul 7 02:56:14.085794 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Jul 7 02:56:14.085969 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Jul 7 02:56:14.086142 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Jul 7 02:56:14.086333 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Jul 7 02:56:14.088559 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Jul 7 02:56:14.088746 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Jul 7 02:56:14.088925 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Jul 7 02:56:14.089110 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 7 02:56:14.089283 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 7 02:56:14.089514 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 7 02:56:14.089698 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Jul 7 02:56:14.089884 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Jul 7 02:56:14.090067 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 7 02:56:14.090242 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Jul 7 02:56:14.090467 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Jul 7 02:56:14.090648 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Jul 7 02:56:14.090848 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 02:56:14.091023 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Jul 7 02:56:14.091195 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:56:14.093419 kernel: pci_bus 0000:02: extended config space not accessible Jul 7 02:56:14.093639 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Jul 7 02:56:14.093843 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Jul 7 02:56:14.094032 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 02:56:14.094210 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 02:56:14.100483 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Jul 7 02:56:14.100703 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Jul 7 02:56:14.100898 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 02:56:14.101071 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 02:56:14.101242 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:56:14.101466 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Jul 7 02:56:14.101667 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Jul 7 02:56:14.101857 kernel: pci 0000:00:02.2: PCI bridge to [bus 
04] Jul 7 02:56:14.102031 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 02:56:14.102204 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:56:14.102395 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 02:56:14.102571 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 02:56:14.102753 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:56:14.102942 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 02:56:14.103121 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 02:56:14.103295 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:56:14.103504 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 02:56:14.103679 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 02:56:14.103866 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:56:14.104045 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 02:56:14.104219 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 02:56:14.104427 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:56:14.104600 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 02:56:14.104783 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 02:56:14.104953 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:56:14.104973 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 7 02:56:14.104987 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 7 02:56:14.105000 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 7 02:56:14.105013 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 7 02:56:14.105026 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 7 02:56:14.105047 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 7 02:56:14.105060 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 7 02:56:14.105074 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 7 02:56:14.105087 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 7 02:56:14.105100 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 7 02:56:14.105113 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 7 02:56:14.105126 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 7 02:56:14.105138 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 7 02:56:14.105151 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 7 02:56:14.105170 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 7 02:56:14.105183 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 7 02:56:14.105196 kernel: iommu: Default domain type: Translated Jul 7 02:56:14.105209 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 7 02:56:14.105222 kernel: PCI: Using ACPI for IRQ routing Jul 7 02:56:14.105235 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 7 02:56:14.105248 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jul 7 02:56:14.105261 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Jul 7 02:56:14.106506 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 7 02:56:14.106687 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 7 02:56:14.106874 kernel: pci 0000:00:01.0: vgaarb: VGA device added: 
decodes=io+mem,owns=io+mem,locks=none Jul 7 02:56:14.106895 kernel: vgaarb: loaded Jul 7 02:56:14.106909 kernel: clocksource: Switched to clocksource kvm-clock Jul 7 02:56:14.106922 kernel: VFS: Disk quotas dquot_6.6.0 Jul 7 02:56:14.106936 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 7 02:56:14.106949 kernel: pnp: PnP ACPI init Jul 7 02:56:14.107139 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 7 02:56:14.107169 kernel: pnp: PnP ACPI: found 5 devices Jul 7 02:56:14.107183 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 7 02:56:14.107197 kernel: NET: Registered PF_INET protocol family Jul 7 02:56:14.107210 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 7 02:56:14.107223 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jul 7 02:56:14.107236 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 7 02:56:14.107250 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 7 02:56:14.107263 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jul 7 02:56:14.107282 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jul 7 02:56:14.107295 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 02:56:14.107308 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jul 7 02:56:14.107322 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 7 02:56:14.107335 kernel: NET: Registered PF_XDP protocol family Jul 7 02:56:14.110819 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Jul 7 02:56:14.110999 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jul 7 02:56:14.111174 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jul 7 02:56:14.111388 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jul 7 02:56:14.111565 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jul 7 02:56:14.111749 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jul 7 02:56:14.111929 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jul 7 02:56:14.112115 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jul 7 02:56:14.112298 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Jul 7 02:56:14.112498 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Jul 7 02:56:14.112671 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Jul 7 02:56:14.112858 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Jul 7 02:56:14.113031 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Jul 7 02:56:14.113204 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Jul 7 02:56:14.119623 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Jul 7 02:56:14.119824 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Jul 7 02:56:14.120020 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Jul 7 02:56:14.120235 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Jul 7 02:56:14.120465 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Jul 7 02:56:14.120639 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Jul 7 02:56:14.120826 kernel: pci 0000:00:02.0: bridge window 
[mem 0xfd800000-0xfdbfffff] Jul 7 02:56:14.120996 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:56:14.121167 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Jul 7 02:56:14.121353 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Jul 7 02:56:14.121529 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Jul 7 02:56:14.121710 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:56:14.121911 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Jul 7 02:56:14.122087 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Jul 7 02:56:14.122258 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Jul 7 02:56:14.122483 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:56:14.122664 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Jul 7 02:56:14.122854 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Jul 7 02:56:14.123025 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Jul 7 02:56:14.123208 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:56:14.123402 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Jul 7 02:56:14.123575 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Jul 7 02:56:14.123762 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Jul 7 02:56:14.123937 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:56:14.124110 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Jul 7 02:56:14.124284 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Jul 7 02:56:14.124485 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Jul 7 02:56:14.125415 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:56:14.125598 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Jul 7 02:56:14.125807 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Jul 7 02:56:14.125980 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Jul 7 02:56:14.126162 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:56:14.127514 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Jul 7 02:56:14.127733 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Jul 7 02:56:14.127923 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Jul 7 02:56:14.128094 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:56:14.128270 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 7 02:56:14.128443 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 7 02:56:14.128598 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 7 02:56:14.128769 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Jul 7 02:56:14.130558 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 7 02:56:14.130721 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Jul 7 02:56:14.130911 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jul 7 02:56:14.131074 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Jul 7 02:56:14.131235 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref] Jul 7 02:56:14.132898 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Jul 7 02:56:14.133086 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Jul 7 02:56:14.133248 kernel: pci_bus 0000:03: resource 1 
[mem 0xfe800000-0xfe9fffff] Jul 7 02:56:14.134521 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Jul 7 02:56:14.134713 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Jul 7 02:56:14.134897 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Jul 7 02:56:14.135058 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Jul 7 02:56:14.135231 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Jul 7 02:56:14.138186 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Jul 7 02:56:14.138404 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Jul 7 02:56:14.138602 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Jul 7 02:56:14.138783 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Jul 7 02:56:14.138946 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Jul 7 02:56:14.139124 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Jul 7 02:56:14.139285 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Jul 7 02:56:14.139490 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Jul 7 02:56:14.139669 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Jul 7 02:56:14.139845 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Jul 7 02:56:14.140007 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Jul 7 02:56:14.140177 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Jul 7 02:56:14.140369 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Jul 7 02:56:14.140535 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Jul 7 02:56:14.140564 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 7 02:56:14.140579 kernel: PCI: CLS 0 bytes, default 64 Jul 7 02:56:14.140593 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 7 02:56:14.140607 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Jul 7 02:56:14.140621 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 7 02:56:14.140635 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Jul 7 02:56:14.140649 kernel: Initialise system trusted keyrings Jul 7 02:56:14.140672 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jul 7 02:56:14.140691 kernel: Key type asymmetric registered Jul 7 02:56:14.140705 kernel: Asymmetric key parser 'x509' registered Jul 7 02:56:14.140718 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 7 02:56:14.140733 kernel: io scheduler mq-deadline registered Jul 7 02:56:14.140758 kernel: io scheduler kyber registered Jul 7 02:56:14.140772 kernel: io scheduler bfq registered Jul 7 02:56:14.140948 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Jul 7 02:56:14.141151 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Jul 7 02:56:14.141329 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.142227 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Jul 7 02:56:14.142433 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Jul 7 02:56:14.142608 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.142816 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Jul 7 
02:56:14.142989 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Jul 7 02:56:14.143160 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.143360 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Jul 7 02:56:14.143558 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Jul 7 02:56:14.143761 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.143940 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Jul 7 02:56:14.144112 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Jul 7 02:56:14.144285 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.144508 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Jul 7 02:56:14.144681 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Jul 7 02:56:14.144870 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.145051 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Jul 7 02:56:14.145225 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Jul 7 02:56:14.145472 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.145658 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Jul 7 02:56:14.145841 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Jul 7 02:56:14.146014 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jul 7 02:56:14.146035 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 7 02:56:14.146050 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 7 02:56:14.146064 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 7 02:56:14.146085 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 7 02:56:14.146099 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 7 02:56:14.146113 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 7 02:56:14.146128 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 7 02:56:14.146142 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 7 02:56:14.146317 kernel: rtc_cmos 00:03: RTC can wake from S4 Jul 7 02:56:14.146523 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 7 02:56:14.146696 kernel: rtc_cmos 00:03: registered as rtc0 Jul 7 02:56:14.146881 kernel: rtc_cmos 00:03: setting system clock to 2025-07-07T02:56:13 UTC (1751856973) Jul 7 02:56:14.147042 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jul 7 02:56:14.147062 kernel: intel_pstate: CPU model not supported Jul 7 02:56:14.147076 kernel: NET: Registered PF_INET6 protocol family Jul 7 02:56:14.147090 kernel: Segment Routing with IPv6 Jul 7 02:56:14.147104 kernel: In-situ OAM (IOAM) with IPv6 Jul 7 02:56:14.147118 kernel: NET: Registered PF_PACKET protocol family Jul 7 02:56:14.147132 kernel: Key type dns_resolver registered Jul 7 02:56:14.147145 kernel: IPI shorthand broadcast: enabled Jul 7 02:56:14.147166 kernel: sched_clock: Marking stable (1225067414, 238555594)->(1715679684, -252056676) Jul 7 02:56:14.147181 kernel: 
registered taskstats version 1 Jul 7 02:56:14.147194 kernel: Loading compiled-in X.509 certificates Jul 7 02:56:14.147208 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 7 02:56:14.147222 kernel: Key type .fscrypt registered Jul 7 02:56:14.147236 kernel: Key type fscrypt-provisioning registered Jul 7 02:56:14.147249 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 7 02:56:14.147263 kernel: ima: Allocated hash algorithm: sha1 Jul 7 02:56:14.147277 kernel: ima: No architecture policies found Jul 7 02:56:14.147296 kernel: clk: Disabling unused clocks Jul 7 02:56:14.147310 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 7 02:56:14.147324 kernel: Write protecting the kernel read-only data: 36864k Jul 7 02:56:14.147350 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 7 02:56:14.147366 kernel: Run /init as init process Jul 7 02:56:14.147380 kernel: with arguments: Jul 7 02:56:14.147394 kernel: /init Jul 7 02:56:14.147413 kernel: with environment: Jul 7 02:56:14.147427 kernel: HOME=/ Jul 7 02:56:14.147444 kernel: TERM=linux Jul 7 02:56:14.147458 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 7 02:56:14.147475 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 02:56:14.147493 systemd[1]: Detected virtualization kvm. Jul 7 02:56:14.147507 systemd[1]: Detected architecture x86-64. Jul 7 02:56:14.147521 systemd[1]: Running in initrd. Jul 7 02:56:14.147536 systemd[1]: No hostname configured, using default hostname. Jul 7 02:56:14.147550 systemd[1]: Hostname set to . Jul 7 02:56:14.147571 systemd[1]: Initializing machine ID from VM UUID. Jul 7 02:56:14.147585 systemd[1]: Queued start job for default target initrd.target. Jul 7 02:56:14.147600 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 02:56:14.147615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:56:14.147630 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 7 02:56:14.147645 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 02:56:14.147660 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 7 02:56:14.147675 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 7 02:56:14.147696 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 7 02:56:14.147712 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 7 02:56:14.147727 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:56:14.147754 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:56:14.147769 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:56:14.147784 systemd[1]: Reached target slices.target - Slice Units. Jul 7 02:56:14.147804 systemd[1]: Reached target swap.target - Swaps. 
Jul 7 02:56:14.147819 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:56:14.147834 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 02:56:14.147848 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 02:56:14.147863 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 7 02:56:14.147878 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 7 02:56:14.147893 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:56:14.147907 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 02:56:14.147922 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:56:14.147941 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 02:56:14.147956 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 7 02:56:14.147971 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 02:56:14.147985 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 7 02:56:14.148000 systemd[1]: Starting systemd-fsck-usr.service... Jul 7 02:56:14.148015 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 02:56:14.148030 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 02:56:14.148044 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:56:14.148059 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 7 02:56:14.148079 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:56:14.148094 systemd[1]: Finished systemd-fsck-usr.service. Jul 7 02:56:14.148147 systemd-journald[201]: Collecting audit messages is disabled. Jul 7 02:56:14.148186 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 02:56:14.148202 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:56:14.148216 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 7 02:56:14.148230 kernel: Bridge firewalling registered Jul 7 02:56:14.148245 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 02:56:14.148265 systemd-journald[201]: Journal started Jul 7 02:56:14.148306 systemd-journald[201]: Runtime Journal (/run/log/journal/0fc346330891405d94e889fabf5ba592) is 4.7M, max 38.0M, 33.2M free. Jul 7 02:56:14.069010 systemd-modules-load[202]: Inserted module 'overlay' Jul 7 02:56:14.141429 systemd-modules-load[202]: Inserted module 'br_netfilter' Jul 7 02:56:14.153372 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 02:56:14.154291 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 02:56:14.156023 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:56:14.171560 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 02:56:14.174531 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 02:56:14.179527 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 02:56:14.184117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jul 7 02:56:14.188815 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:56:14.192536 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 7 02:56:14.205964 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 02:56:14.207144 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:56:14.219651 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 02:56:14.223633 dracut-cmdline[233]: dracut-dracut-053 Jul 7 02:56:14.223633 dracut-cmdline[233]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 7 02:56:14.268674 systemd-resolved[240]: Positive Trust Anchors: Jul 7 02:56:14.269765 systemd-resolved[240]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 02:56:14.269812 systemd-resolved[240]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 02:56:14.278325 systemd-resolved[240]: Defaulting to hostname 'linux'. Jul 7 02:56:14.281458 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 02:56:14.283162 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:56:14.327412 kernel: SCSI subsystem initialized Jul 7 02:56:14.339371 kernel: Loading iSCSI transport class v2.0-870. Jul 7 02:56:14.353426 kernel: iscsi: registered transport (tcp) Jul 7 02:56:14.382920 kernel: iscsi: registered transport (qla4xxx) Jul 7 02:56:14.383013 kernel: QLogic iSCSI HBA Driver Jul 7 02:56:14.438215 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 7 02:56:14.448698 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 7 02:56:14.501051 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 7 02:56:14.501145 kernel: device-mapper: uevent: version 1.0.3 Jul 7 02:56:14.501924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 7 02:56:14.554411 kernel: raid6: sse2x4 gen() 13668 MB/s Jul 7 02:56:14.570396 kernel: raid6: sse2x2 gen() 9388 MB/s Jul 7 02:56:14.589152 kernel: raid6: sse2x1 gen() 9810 MB/s Jul 7 02:56:14.589216 kernel: raid6: using algorithm sse2x4 gen() 13668 MB/s Jul 7 02:56:14.608193 kernel: raid6: .... 
xor() 7625 MB/s, rmw enabled Jul 7 02:56:14.608271 kernel: raid6: using ssse3x2 recovery algorithm Jul 7 02:56:14.635391 kernel: xor: automatically using best checksumming function avx Jul 7 02:56:14.831400 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 7 02:56:14.846638 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 7 02:56:14.854560 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:56:14.882136 systemd-udevd[420]: Using default interface naming scheme 'v255'. Jul 7 02:56:14.889624 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:56:14.899640 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 7 02:56:14.923776 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation Jul 7 02:56:14.966957 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 02:56:14.973578 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 02:56:15.092092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:56:15.102588 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 7 02:56:15.138809 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 7 02:56:15.143860 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 02:56:15.145188 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:56:15.147278 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 02:56:15.156570 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 7 02:56:15.182389 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 7 02:56:15.232408 kernel: cryptd: max_cpu_qlen set to 1000 Jul 7 02:56:15.238580 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Jul 7 02:56:15.245558 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jul 7 02:56:15.257355 kernel: AVX version of gcm_enc/dec engaged. Jul 7 02:56:15.257402 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 7 02:56:15.258886 kernel: GPT:17805311 != 125829119 Jul 7 02:56:15.258918 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 7 02:56:15.262846 kernel: GPT:17805311 != 125829119 Jul 7 02:56:15.262878 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 7 02:56:15.263363 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:56:15.265189 kernel: AES CTR mode by8 optimization enabled Jul 7 02:56:15.275956 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 02:56:15.276139 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:56:15.278126 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 02:56:15.280837 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 02:56:15.281011 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:56:15.310108 kernel: ACPI: bus type USB registered Jul 7 02:56:15.310150 kernel: usbcore: registered new interface driver usbfs Jul 7 02:56:15.310170 kernel: usbcore: registered new interface driver hub Jul 7 02:56:15.310187 kernel: usbcore: registered new device driver usb Jul 7 02:56:15.287828 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 7 02:56:15.299644 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:56:15.358431 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 02:56:15.358806 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Jul 7 02:56:15.365363 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jul 7 02:56:15.366362 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Jul 7 02:56:15.366635 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Jul 7 02:56:15.366867 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Jul 7 02:56:15.365990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 7 02:56:15.503387 kernel: hub 1-0:1.0: USB hub found Jul 7 02:56:15.503739 kernel: hub 1-0:1.0: 4 ports detected Jul 7 02:56:15.503972 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (472) Jul 7 02:56:15.503994 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jul 7 02:56:15.504287 kernel: hub 2-0:1.0: USB hub found Jul 7 02:56:15.504660 kernel: hub 2-0:1.0: 4 ports detected Jul 7 02:56:15.504920 kernel: libata version 3.00 loaded. Jul 7 02:56:15.504943 kernel: ahci 0000:00:1f.2: version 3.0 Jul 7 02:56:15.505153 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jul 7 02:56:15.505194 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Jul 7 02:56:15.505430 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jul 7 02:56:15.505656 kernel: scsi host0: ahci Jul 7 02:56:15.505879 kernel: scsi host1: ahci Jul 7 02:56:15.506088 kernel: scsi host2: ahci Jul 7 02:56:15.506311 kernel: scsi host3: ahci Jul 7 02:56:15.506576 kernel: scsi host4: ahci Jul 7 02:56:15.506791 kernel: scsi host5: ahci Jul 7 02:56:15.506985 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Jul 7 02:56:15.507007 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Jul 7 02:56:15.507025 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Jul 7 02:56:15.507060 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Jul 7 02:56:15.507079 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Jul 7 02:56:15.507096 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Jul 7 02:56:15.507118 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (466) Jul 7 02:56:15.508149 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:56:15.527105 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 02:56:15.534536 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 7 02:56:15.542236 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 7 02:56:15.543101 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 7 02:56:15.555595 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 7 02:56:15.561566 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 7 02:56:15.564385 disk-uuid[563]: Primary Header is updated. 
Jul 7 02:56:15.564385 disk-uuid[563]: Secondary Entries is updated. Jul 7 02:56:15.564385 disk-uuid[563]: Secondary Header is updated. Jul 7 02:56:15.572819 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:56:15.578387 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:56:15.608443 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:56:15.626366 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jul 7 02:56:15.727366 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.727462 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.727487 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.731393 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.734681 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.734732 kernel: ata3: SATA link down (SStatus 0 SControl 300) Jul 7 02:56:15.767411 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 7 02:56:15.773911 kernel: usbcore: registered new interface driver usbhid Jul 7 02:56:15.773971 kernel: usbhid: USB HID core driver Jul 7 02:56:15.781741 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Jul 7 02:56:15.781786 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Jul 7 02:56:16.582392 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 7 02:56:16.584211 disk-uuid[564]: The operation has completed successfully. Jul 7 02:56:16.631529 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 7 02:56:16.631746 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 7 02:56:16.670678 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 7 02:56:16.675221 sh[583]: Success Jul 7 02:56:16.694618 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Jul 7 02:56:16.760832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 7 02:56:16.770757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 7 02:56:16.775065 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 7 02:56:16.807410 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f Jul 7 02:56:16.807476 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:56:16.807497 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 7 02:56:16.808783 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 7 02:56:16.810587 kernel: BTRFS info (device dm-0): using free space tree Jul 7 02:56:16.822750 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 7 02:56:16.824974 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 7 02:56:16.831592 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 7 02:56:16.834504 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 7 02:56:16.853683 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:56:16.853782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:56:16.853803 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:56:16.859362 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:56:16.877402 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:56:16.877532 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 7 02:56:16.887697 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 7 02:56:16.894562 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 7 02:56:17.031261 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 02:56:17.041285 ignition[683]: Ignition 2.19.0 Jul 7 02:56:17.041313 ignition[683]: Stage: fetch-offline Jul 7 02:56:17.041442 ignition[683]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:17.041474 ignition[683]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:17.046238 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 02:56:17.041668 ignition[683]: parsed url from cmdline: "" Jul 7 02:56:17.041675 ignition[683]: no config URL provided Jul 7 02:56:17.041700 ignition[683]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 02:56:17.049397 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 02:56:17.041718 ignition[683]: no config at "/usr/lib/ignition/user.ign" Jul 7 02:56:17.041728 ignition[683]: failed to fetch config: resource requires networking Jul 7 02:56:17.042006 ignition[683]: Ignition finished successfully Jul 7 02:56:17.083291 systemd-networkd[773]: lo: Link UP Jul 7 02:56:17.083308 systemd-networkd[773]: lo: Gained carrier Jul 7 02:56:17.086044 systemd-networkd[773]: Enumeration completed Jul 7 02:56:17.086739 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:56:17.086745 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 02:56:17.087815 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 02:56:17.088981 systemd[1]: Reached target network.target - Network. Jul 7 02:56:17.089673 systemd-networkd[773]: eth0: Link UP Jul 7 02:56:17.089696 systemd-networkd[773]: eth0: Gained carrier Jul 7 02:56:17.089710 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:56:17.099590 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jul 7 02:56:17.117461 systemd-networkd[773]: eth0: DHCPv4 address 10.244.11.130/30, gateway 10.244.11.129 acquired from 10.244.11.129 Jul 7 02:56:17.122898 ignition[776]: Ignition 2.19.0 Jul 7 02:56:17.122917 ignition[776]: Stage: fetch Jul 7 02:56:17.123229 ignition[776]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:17.123256 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:17.123877 ignition[776]: parsed url from cmdline: "" Jul 7 02:56:17.123885 ignition[776]: no config URL provided Jul 7 02:56:17.123895 ignition[776]: reading system config file "/usr/lib/ignition/user.ign" Jul 7 02:56:17.123911 ignition[776]: no config at "/usr/lib/ignition/user.ign" Jul 7 02:56:17.124105 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jul 7 02:56:17.125292 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jul 7 02:56:17.125404 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jul 7 02:56:17.148290 ignition[776]: GET result: OK Jul 7 02:56:17.149139 ignition[776]: parsing config with SHA512: 86fda6bde7b755595bb9ba32f425e5c7759743ed4b4357b6b191c979c924e481ab1305c0fdfe062bd302e8a27c9812feac309046b2fd0cff1cf4e2ddee9489b3 Jul 7 02:56:17.155051 unknown[776]: fetched base config from "system" Jul 7 02:56:17.155070 unknown[776]: fetched base config from "system" Jul 7 02:56:17.155728 ignition[776]: fetch: fetch complete Jul 7 02:56:17.155080 unknown[776]: fetched user config from "openstack" Jul 7 02:56:17.155737 ignition[776]: fetch: fetch passed Jul 7 02:56:17.158636 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 7 02:56:17.155803 ignition[776]: Ignition finished successfully Jul 7 02:56:17.170614 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 7 02:56:17.194601 ignition[783]: Ignition 2.19.0 Jul 7 02:56:17.194624 ignition[783]: Stage: kargs Jul 7 02:56:17.194882 ignition[783]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:17.194905 ignition[783]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:17.198441 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 7 02:56:17.196662 ignition[783]: kargs: kargs passed Jul 7 02:56:17.196753 ignition[783]: Ignition finished successfully Jul 7 02:56:17.205541 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 7 02:56:17.227235 ignition[789]: Ignition 2.19.0 Jul 7 02:56:17.227258 ignition[789]: Stage: disks Jul 7 02:56:17.227552 ignition[789]: no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:17.230102 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 02:56:17.227573 ignition[789]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:17.232467 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 02:56:17.228752 ignition[789]: disks: disks passed Jul 7 02:56:17.233255 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 02:56:17.228832 ignition[789]: Ignition finished successfully Jul 7 02:56:17.234817 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 02:56:17.236483 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 02:56:17.238037 systemd[1]: Reached target basic.target - Basic System. Jul 7 02:56:17.246636 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
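In the fetch stage above, Ignition gives up waiting for a config drive and pulls the instance user_data from the OpenStack metadata service, then logs the SHA512 of the config it parsed. A hedged sketch of those two steps using only the Python standard library; the URL is the one shown in the log, everything else is illustrative rather than Ignition's actual implementation:

    # Fetch OpenStack user_data the way the log shows and print its SHA512,
    # mirroring the "parsing config with SHA512: ..." line above.
    import hashlib
    import urllib.request

    USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"  # from the log

    with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
        user_data = resp.read()

    print("GET result: OK")
    print("SHA512:", hashlib.sha512(user_data).hexdigest())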
Jul 7 02:56:17.266887 systemd-fsck[797]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jul 7 02:56:17.270913 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 02:56:17.277471 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 02:56:17.407372 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none. Jul 7 02:56:17.408827 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 02:56:17.410330 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 02:56:17.416492 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 02:56:17.421387 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 02:56:17.422492 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 02:56:17.423550 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jul 7 02:56:17.425929 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 02:56:17.425972 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 02:56:17.437599 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 02:56:17.439658 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (805) Jul 7 02:56:17.451275 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:56:17.451369 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:56:17.451393 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:56:17.454700 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 02:56:17.463522 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:56:17.469054 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 02:56:17.533793 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 02:56:17.543133 initrd-setup-root[842]: cut: /sysroot/etc/group: No such file or directory Jul 7 02:56:17.549881 initrd-setup-root[849]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 02:56:17.558207 initrd-setup-root[856]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 02:56:17.673010 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 02:56:17.679472 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 7 02:56:17.683568 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 02:56:17.697373 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:56:17.729079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 02:56:17.732327 ignition[923]: INFO : Ignition 2.19.0 Jul 7 02:56:17.733321 ignition[923]: INFO : Stage: mount Jul 7 02:56:17.735398 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:17.735398 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:17.738052 ignition[923]: INFO : mount: mount passed Jul 7 02:56:17.739014 ignition[923]: INFO : Ignition finished successfully Jul 7 02:56:17.740091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 02:56:17.803040 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jul 7 02:56:18.798758 systemd-networkd[773]: eth0: Gained IPv6LL Jul 7 02:56:20.312466 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2e0:24:19ff:fef4:b82/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2e0:24:19ff:fef4:b82/64 assigned by NDisc. Jul 7 02:56:20.312496 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 02:56:24.606371 coreos-metadata[807]: Jul 07 02:56:24.606 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:56:24.630706 coreos-metadata[807]: Jul 07 02:56:24.630 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 02:56:24.647699 coreos-metadata[807]: Jul 07 02:56:24.647 INFO Fetch successful Jul 7 02:56:24.648794 coreos-metadata[807]: Jul 07 02:56:24.648 INFO wrote hostname srv-3i0x6.gb1.brightbox.com to /sysroot/etc/hostname Jul 7 02:56:24.651180 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jul 7 02:56:24.651428 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jul 7 02:56:24.661516 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 02:56:24.676585 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 02:56:24.703699 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (940) Jul 7 02:56:24.710202 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b Jul 7 02:56:24.710249 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 02:56:24.710270 kernel: BTRFS info (device vda6): using free space tree Jul 7 02:56:24.715363 kernel: BTRFS info (device vda6): auto enabling async discard Jul 7 02:56:24.718415 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
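The flatcar-openstack-hostname agent above likewise falls back from the config drive to the metadata service, fetches the instance hostname, and writes it into /sysroot/etc/hostname. A minimal sketch of the same idea; the URL and target path are taken from the log, the rest is an assumption rather than the agent's real code:

    # Fetch the instance hostname from the metadata service and write it to the
    # path the log reports (/sysroot/etc/hostname).
    import urllib.request

    HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"  # from the log
    TARGET = "/sysroot/etc/hostname"                                   # from the log

    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()

    with open(TARGET, "w") as f:
        f.write(hostname + "\n")

    print(f"wrote hostname {hostname} to {TARGET}")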
Jul 7 02:56:24.749503 ignition[957]: INFO : Ignition 2.19.0 Jul 7 02:56:24.749503 ignition[957]: INFO : Stage: files Jul 7 02:56:24.751573 ignition[957]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:24.751573 ignition[957]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:24.751573 ignition[957]: DEBUG : files: compiled without relabeling support, skipping Jul 7 02:56:24.754582 ignition[957]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 02:56:24.754582 ignition[957]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 02:56:24.756759 ignition[957]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 02:56:24.758013 ignition[957]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 02:56:24.758013 ignition[957]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 02:56:24.757495 unknown[957]: wrote ssh authorized keys file for user: core Jul 7 02:56:24.761026 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 02:56:24.761026 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 7 02:56:24.945482 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 02:56:25.219104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 02:56:25.220757 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:56:25.236104 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jul 7 02:56:25.929863 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 02:56:27.410386 ignition[957]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jul 7 02:56:27.410386 ignition[957]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 02:56:27.413860 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 02:56:27.413860 ignition[957]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 02:56:27.413860 ignition[957]: INFO : files: files passed Jul 7 02:56:27.413860 ignition[957]: INFO : Ignition finished successfully Jul 7 02:56:27.415192 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 02:56:27.425669 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 02:56:27.435423 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 02:56:27.442277 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 02:56:27.442767 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 02:56:27.452518 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:56:27.455143 initrd-setup-root-after-ignition[986]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:56:27.456305 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 02:56:27.457789 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 02:56:27.459022 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 02:56:27.465563 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 02:56:27.505125 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 02:56:27.505305 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 02:56:27.507239 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 7 02:56:27.508600 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 02:56:27.510162 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 02:56:27.515532 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 02:56:27.536221 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 02:56:27.541554 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 02:56:27.564441 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 7 02:56:27.566450 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:56:27.568506 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 02:56:27.569490 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 02:56:27.569757 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 02:56:27.571657 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 02:56:27.572652 systemd[1]: Stopped target basic.target - Basic System. Jul 7 02:56:27.574194 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 02:56:27.575697 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 02:56:27.577191 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 02:56:27.578851 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 02:56:27.580553 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 02:56:27.582109 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 02:56:27.583626 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 02:56:27.585187 systemd[1]: Stopped target swap.target - Swaps. Jul 7 02:56:27.586661 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 02:56:27.587053 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 02:56:27.588565 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:56:27.589504 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:56:27.590953 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 02:56:27.592423 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 02:56:27.593738 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 02:56:27.593913 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 02:56:27.595779 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 02:56:27.595949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 02:56:27.596888 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 02:56:27.597042 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 7 02:56:27.604673 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 02:56:27.607636 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 02:56:27.610402 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 02:56:27.611487 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:56:27.614837 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jul 7 02:56:27.615846 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 02:56:27.625836 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 02:56:27.626997 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 02:56:27.628856 ignition[1010]: INFO : Ignition 2.19.0 Jul 7 02:56:27.628856 ignition[1010]: INFO : Stage: umount Jul 7 02:56:27.628856 ignition[1010]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 02:56:27.628856 ignition[1010]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jul 7 02:56:27.628856 ignition[1010]: INFO : umount: umount passed Jul 7 02:56:27.628856 ignition[1010]: INFO : Ignition finished successfully Jul 7 02:56:27.634690 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 02:56:27.634847 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 02:56:27.638229 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 02:56:27.638322 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 02:56:27.646566 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 02:56:27.646653 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 02:56:27.647423 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 7 02:56:27.647511 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jul 7 02:56:27.648203 systemd[1]: Stopped target network.target - Network. Jul 7 02:56:27.648836 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 02:56:27.648913 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 02:56:27.651235 systemd[1]: Stopped target paths.target - Path Units. Jul 7 02:56:27.653426 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 02:56:27.657507 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:56:27.658378 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 02:56:27.659015 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 02:56:27.659747 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 02:56:27.660666 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 02:56:27.662151 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 02:56:27.662228 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 02:56:27.663919 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 02:56:27.663997 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 02:56:27.665512 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 02:56:27.665581 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 02:56:27.666929 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 02:56:27.670524 systemd-networkd[773]: eth0: DHCPv6 lease lost Jul 7 02:56:27.670935 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 02:56:27.675225 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 02:56:27.678209 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 02:56:27.679371 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 02:56:27.681142 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 02:56:27.681467 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Jul 7 02:56:27.685399 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 02:56:27.685771 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:56:27.694536 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 7 02:56:27.696009 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 02:56:27.696088 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 02:56:27.699846 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 02:56:27.699916 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:56:27.701289 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 02:56:27.701375 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 02:56:27.704371 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 02:56:27.704445 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:56:27.706243 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:56:27.721884 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 02:56:27.722159 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:56:27.724941 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 02:56:27.725094 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 7 02:56:27.727251 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 02:56:27.727619 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 02:56:27.728581 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 02:56:27.728641 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:56:27.730015 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 02:56:27.730090 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 02:56:27.733556 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 02:56:27.733638 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 02:56:27.735014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 02:56:27.735086 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 02:56:27.742601 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 02:56:27.743467 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 02:56:27.743565 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 02:56:27.745183 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 7 02:56:27.745294 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:56:27.748234 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 02:56:27.748307 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:56:27.749939 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 02:56:27.750020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:56:27.764080 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
Jul 7 02:56:27.764280 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 02:56:27.793549 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 02:56:27.793750 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 02:56:27.795896 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 02:56:27.796691 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 02:56:27.796777 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 02:56:27.804689 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 02:56:27.816721 systemd[1]: Switching root. Jul 7 02:56:27.856086 systemd-journald[201]: Journal stopped Jul 7 02:56:29.383226 systemd-journald[201]: Received SIGTERM from PID 1 (systemd). Jul 7 02:56:29.383423 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 02:56:29.383468 kernel: SELinux: policy capability open_perms=1 Jul 7 02:56:29.383491 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 02:56:29.383510 kernel: SELinux: policy capability always_check_network=0 Jul 7 02:56:29.383528 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 02:56:29.383577 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 02:56:29.383598 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 02:56:29.383624 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 02:56:29.383655 kernel: audit: type=1403 audit(1751856988.098:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 02:56:29.383700 systemd[1]: Successfully loaded SELinux policy in 52.431ms. Jul 7 02:56:29.383745 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 21.825ms. Jul 7 02:56:29.383780 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 7 02:56:29.383809 systemd[1]: Detected virtualization kvm. Jul 7 02:56:29.383831 systemd[1]: Detected architecture x86-64. Jul 7 02:56:29.383852 systemd[1]: Detected first boot. Jul 7 02:56:29.383882 systemd[1]: Hostname set to . Jul 7 02:56:29.383915 systemd[1]: Initializing machine ID from VM UUID. Jul 7 02:56:29.383944 zram_generator::config[1052]: No configuration found. Jul 7 02:56:29.383974 systemd[1]: Populated /etc with preset unit settings. Jul 7 02:56:29.384004 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 02:56:29.384036 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 02:56:29.384058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 02:56:29.384094 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 7 02:56:29.384122 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 02:56:29.384149 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 02:56:29.384182 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 02:56:29.384205 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 02:56:29.384226 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. 
Jul 7 02:56:29.384254 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 02:56:29.384275 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 02:56:29.384296 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 02:56:29.384317 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 02:56:29.390374 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 02:56:29.390452 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 02:56:29.390496 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 02:56:29.390520 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 02:56:29.390542 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 02:56:29.390563 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 02:56:29.390584 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 02:56:29.390606 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 02:56:29.390640 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 02:56:29.390676 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 02:56:29.390698 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 02:56:29.390726 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 02:56:29.390748 systemd[1]: Reached target slices.target - Slice Units. Jul 7 02:56:29.390769 systemd[1]: Reached target swap.target - Swaps. Jul 7 02:56:29.390796 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 02:56:29.390818 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 02:56:29.390839 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 02:56:29.390875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 02:56:29.390909 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 02:56:29.390949 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 02:56:29.390972 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 02:56:29.390994 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 02:56:29.391029 systemd[1]: Mounting media.mount - External Media Directory... Jul 7 02:56:29.391052 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:29.391073 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 02:56:29.391094 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 02:56:29.391115 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 02:56:29.391137 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 02:56:29.391158 systemd[1]: Reached target machines.target - Containers. Jul 7 02:56:29.391180 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Jul 7 02:56:29.391201 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:56:29.391238 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 02:56:29.391267 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 02:56:29.391296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 02:56:29.391318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 02:56:29.391360 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 02:56:29.391384 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 02:56:29.391412 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 02:56:29.391443 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 02:56:29.391486 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 02:56:29.391517 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 02:56:29.391546 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 02:56:29.391568 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 02:56:29.391588 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 02:56:29.391609 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 02:56:29.391631 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 02:56:29.391659 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 02:56:29.391686 kernel: fuse: init (API version 7.39) Jul 7 02:56:29.391724 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 02:56:29.391757 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 02:56:29.391779 systemd[1]: Stopped verity-setup.service. Jul 7 02:56:29.391800 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:29.391822 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 02:56:29.391842 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 02:56:29.391875 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 02:56:29.391904 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 02:56:29.391926 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 02:56:29.391947 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 02:56:29.391968 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 02:56:29.391989 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 02:56:29.392010 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 02:56:29.392033 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 02:56:29.392069 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 02:56:29.392102 kernel: loop: module loaded Jul 7 02:56:29.392124 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 02:56:29.392145 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 7 02:56:29.392166 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 02:56:29.392202 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 02:56:29.392233 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 02:56:29.392255 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 02:56:29.392288 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 02:56:29.392322 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 02:56:29.396039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 02:56:29.396119 systemd-journald[1148]: Collecting audit messages is disabled. Jul 7 02:56:29.396189 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 02:56:29.396225 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 02:56:29.396247 systemd-journald[1148]: Journal started Jul 7 02:56:29.396290 systemd-journald[1148]: Runtime Journal (/run/log/journal/0fc346330891405d94e889fabf5ba592) is 4.7M, max 38.0M, 33.2M free. Jul 7 02:56:29.415648 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 02:56:29.415734 kernel: ACPI: bus type drm_connector registered Jul 7 02:56:29.415776 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 02:56:28.907589 systemd[1]: Queued start job for default target multi-user.target. Jul 7 02:56:28.927652 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 02:56:28.928440 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 02:56:29.419488 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 02:56:29.422534 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 02:56:29.427444 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jul 7 02:56:29.441357 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 02:56:29.447356 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 02:56:29.452403 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:56:29.465423 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 02:56:29.465547 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:56:29.484611 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 02:56:29.492735 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:56:29.497397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 02:56:29.516774 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 02:56:29.526193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 7 02:56:29.531379 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 02:56:29.536522 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jul 7 02:56:29.539637 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 02:56:29.543167 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 02:56:29.544951 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 02:56:29.547409 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 7 02:56:29.589075 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 02:56:29.604095 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 02:56:29.614555 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 02:56:29.623776 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 7 02:56:29.624514 kernel: loop0: detected capacity change from 0 to 142488 Jul 7 02:56:29.675750 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 02:56:29.690769 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 02:56:29.692849 systemd-journald[1148]: Time spent on flushing to /var/log/journal/0fc346330891405d94e889fabf5ba592 is 64.741ms for 1145 entries. Jul 7 02:56:29.692849 systemd-journald[1148]: System Journal (/var/log/journal/0fc346330891405d94e889fabf5ba592) is 8.0M, max 584.8M, 576.8M free. Jul 7 02:56:29.790474 systemd-journald[1148]: Received client request to flush runtime journal. Jul 7 02:56:29.790546 kernel: loop1: detected capacity change from 0 to 221472 Jul 7 02:56:29.705273 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jul 7 02:56:29.705295 systemd-tmpfiles[1168]: ACLs are not supported, ignoring. Jul 7 02:56:29.708628 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 02:56:29.712931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 7 02:56:29.731018 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 02:56:29.744695 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 7 02:56:29.750839 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 7 02:56:29.763655 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 02:56:29.800115 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 02:56:29.819288 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 7 02:56:29.827509 kernel: loop2: detected capacity change from 0 to 140768 Jul 7 02:56:29.828951 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 02:56:29.843619 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 02:56:29.878427 kernel: loop3: detected capacity change from 0 to 8 Jul 7 02:56:29.886070 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jul 7 02:56:29.886569 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Jul 7 02:56:29.894919 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jul 7 02:56:29.917324 kernel: loop4: detected capacity change from 0 to 142488 Jul 7 02:56:29.955433 kernel: loop5: detected capacity change from 0 to 221472 Jul 7 02:56:30.000385 kernel: loop6: detected capacity change from 0 to 140768 Jul 7 02:56:30.041389 kernel: loop7: detected capacity change from 0 to 8 Jul 7 02:56:30.046511 (sd-merge)[1213]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Jul 7 02:56:30.047532 (sd-merge)[1213]: Merged extensions into '/usr'. Jul 7 02:56:30.060944 systemd[1]: Reloading requested from client PID 1167 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 02:56:30.060989 systemd[1]: Reloading... Jul 7 02:56:30.211373 zram_generator::config[1239]: No configuration found. Jul 7 02:56:30.258695 ldconfig[1163]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 02:56:30.418892 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:56:30.490212 systemd[1]: Reloading finished in 428 ms. Jul 7 02:56:30.518417 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 02:56:30.521105 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 02:56:30.535698 systemd[1]: Starting ensure-sysext.service... Jul 7 02:56:30.542006 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 02:56:30.563409 systemd[1]: Reloading requested from client PID 1295 ('systemctl') (unit ensure-sysext.service)... Jul 7 02:56:30.563435 systemd[1]: Reloading... Jul 7 02:56:30.624768 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 02:56:30.626957 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 02:56:30.630679 systemd-tmpfiles[1296]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 02:56:30.631104 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jul 7 02:56:30.631216 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jul 7 02:56:30.639994 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:56:30.640014 systemd-tmpfiles[1296]: Skipping /boot Jul 7 02:56:30.663368 zram_generator::config[1318]: No configuration found. Jul 7 02:56:30.669527 systemd-tmpfiles[1296]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 02:56:30.669548 systemd-tmpfiles[1296]: Skipping /boot Jul 7 02:56:30.886722 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:56:30.956292 systemd[1]: Reloading finished in 392 ms. Jul 7 02:56:30.982691 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 02:56:30.991024 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 02:56:31.005686 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 7 02:56:31.009710 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
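The sd-merge lines above show systemd-sysext overlaying the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack') onto /usr; the kubernetes image is the one Ignition linked into /etc/extensions earlier. A small sketch that lists candidate sysext images; /etc/extensions is taken from the log, the other two directories are assumed to be the usual search locations:

    # List raw sysext images like the ones sd-merge reports merging above.
    import glob

    for pattern in ("/etc/extensions/*.raw",
                    "/run/extensions/*.raw",
                    "/var/lib/extensions/*.raw"):
        for image in sorted(glob.glob(pattern)):
            print(image)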
Jul 7 02:56:31.014699 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 02:56:31.028869 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 02:56:31.035447 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 02:56:31.040232 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 02:56:31.047838 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.048128 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:56:31.053741 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 02:56:31.057630 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 02:56:31.059955 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 02:56:31.061562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:56:31.065518 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 02:56:31.066752 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.070615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.070909 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:56:31.071179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:56:31.071368 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.077565 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.078153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 02:56:31.085746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 02:56:31.087245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 02:56:31.087506 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 02:56:31.091566 systemd[1]: Finished ensure-sysext.service. Jul 7 02:56:31.103633 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 02:56:31.123156 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 02:56:31.123871 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 02:56:31.126486 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 02:56:31.144853 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 02:56:31.151668 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 02:56:31.159365 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jul 7 02:56:31.159636 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 02:56:31.160733 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 02:56:31.168987 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 02:56:31.169230 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 02:56:31.171105 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 02:56:31.181517 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 02:56:31.181920 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 02:56:31.186544 systemd-udevd[1391]: Using default interface naming scheme 'v255'. Jul 7 02:56:31.203688 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 02:56:31.209619 augenrules[1416]: No rules Jul 7 02:56:31.211794 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 02:56:31.213228 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 02:56:31.214930 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 02:56:31.233495 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 02:56:31.234567 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 02:56:31.245598 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 02:56:31.336099 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 02:56:31.337894 systemd[1]: Reached target time-set.target - System Time Set. Jul 7 02:56:31.419800 systemd-networkd[1433]: lo: Link UP Jul 7 02:56:31.420392 systemd-resolved[1387]: Positive Trust Anchors: Jul 7 02:56:31.420406 systemd-resolved[1387]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 02:56:31.420466 systemd-resolved[1387]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 02:56:31.422439 systemd-networkd[1433]: lo: Gained carrier Jul 7 02:56:31.424701 systemd-networkd[1433]: Enumeration completed Jul 7 02:56:31.424837 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 02:56:31.434557 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 02:56:31.436372 systemd-resolved[1387]: Using system hostname 'srv-3i0x6.gb1.brightbox.com'. Jul 7 02:56:31.439441 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 02:56:31.440282 systemd[1]: Reached target network.target - Network. Jul 7 02:56:31.440962 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
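The "Positive Trust Anchors" entry is systemd-resolved loading its built-in DNSSEC root trust anchor (the ". IN DS 20326 8 2 e06d44b8..." record is the IANA root key-signing key), while the negative anchors exempt private and reverse-lookup zones from validation. A hedged illustration of how this can be inspected on a live machine, not output captured here:

    resolvectl status              # per-link DNS servers and DNSSEC state
    resolvectl query example.com   # hypothetical lookup through the local stub resolver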
Jul 7 02:56:31.462401 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 02:56:31.490383 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1434) Jul 7 02:56:31.566956 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 02:56:31.574936 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 02:56:31.582915 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:56:31.583403 systemd-networkd[1433]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 02:56:31.586814 systemd-networkd[1433]: eth0: Link UP Jul 7 02:56:31.587283 systemd-networkd[1433]: eth0: Gained carrier Jul 7 02:56:31.587511 systemd-networkd[1433]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 02:56:31.607160 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 02:56:31.606452 systemd-networkd[1433]: eth0: DHCPv4 address 10.244.11.130/30, gateway 10.244.11.129 acquired from 10.244.11.129 Jul 7 02:56:31.609230 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 7 02:56:31.614364 kernel: ACPI: button: Power Button [PWRF] Jul 7 02:56:31.617225 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 02:56:31.636641 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 02:56:31.667200 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 02:56:31.667700 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 7 02:56:31.667971 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 02:56:31.697365 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 7 02:56:31.776730 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 02:56:31.972123 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 02:56:31.979008 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 7 02:56:31.989622 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 7 02:56:32.005792 lvm[1468]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 02:56:32.039565 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 7 02:56:32.040930 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 02:56:32.041724 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 02:56:32.042749 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 02:56:32.043602 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 02:56:32.044835 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 02:56:32.045827 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 02:56:32.046647 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
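eth0 here is matched by the catch-all zz-default.network and picks up 10.244.11.130/30 over DHCPv4, which is why networkd warns that the match is based on a potentially unpredictable interface name. A minimal per-interface unit that would make the match explicit, as a sketch only (the file name 10-eth0.network is hypothetical):

    # /etc/systemd/network/10-eth0.network
    [Match]
    Name=eth0

    [Network]
    DHCP=ipv4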
Jul 7 02:56:32.047471 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 02:56:32.047530 systemd[1]: Reached target paths.target - Path Units. Jul 7 02:56:32.048244 systemd[1]: Reached target timers.target - Timer Units. Jul 7 02:56:32.050365 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 02:56:32.052955 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 02:56:32.059611 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 02:56:32.062338 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 7 02:56:32.063818 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 02:56:32.064721 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 02:56:32.065420 systemd[1]: Reached target basic.target - Basic System. Jul 7 02:56:32.066126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:56:32.066190 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 02:56:32.069495 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 02:56:32.078851 lvm[1472]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 7 02:56:32.080594 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 7 02:56:32.086608 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 02:56:32.092488 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 02:56:32.094795 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 02:56:32.095737 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 02:56:32.103627 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 02:56:32.108701 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 02:56:32.114606 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 02:56:32.122737 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 02:56:32.142778 jq[1476]: false Jul 7 02:56:32.151812 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 02:56:32.154098 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 02:56:32.154851 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 02:56:32.162593 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 02:56:32.165055 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 02:56:32.168952 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 02:56:32.169227 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 02:56:32.185078 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 7 02:56:32.203674 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 02:56:32.204179 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 7 02:56:32.207029 dbus-daemon[1475]: [system] SELinux support is enabled Jul 7 02:56:32.207264 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 02:56:32.211976 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 02:56:32.212398 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 02:56:32.216743 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 02:56:32.220260 dbus-daemon[1475]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1433 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 7 02:56:32.216800 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 02:56:32.218149 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 02:56:32.218181 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 02:56:32.227747 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 7 02:56:32.236518 jq[1486]: true Jul 7 02:56:32.242524 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jul 7 02:56:32.249923 extend-filesystems[1478]: Found loop4 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found loop5 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found loop6 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found loop7 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda1 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda2 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda3 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found usr Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda4 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda6 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda7 Jul 7 02:56:32.252910 extend-filesystems[1478]: Found vda9 Jul 7 02:56:32.252910 extend-filesystems[1478]: Checking size of /dev/vda9 Jul 7 02:56:32.274504 update_engine[1485]: I20250707 02:56:32.266199 1485 main.cc:92] Flatcar Update Engine starting Jul 7 02:56:32.272112 (ntainerd)[1504]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 02:56:32.282295 systemd[1]: Started update-engine.service - Update Engine. Jul 7 02:56:32.287880 update_engine[1485]: I20250707 02:56:32.286160 1485 update_check_scheduler.cc:74] Next update check in 4m41s Jul 7 02:56:32.287983 tar[1488]: linux-amd64/helm Jul 7 02:56:32.294818 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jul 7 02:56:32.305513 extend-filesystems[1478]: Resized partition /dev/vda9 Jul 7 02:56:32.316988 jq[1507]: true Jul 7 02:56:32.317455 extend-filesystems[1516]: resize2fs 1.47.1 (20-May-2024) Jul 7 02:56:32.335401 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Jul 7 02:56:32.343364 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1429) Jul 7 02:56:32.377256 systemd-logind[1484]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 02:56:32.377299 systemd-logind[1484]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 02:56:32.377644 systemd-logind[1484]: New seat seat0. Jul 7 02:56:32.379775 systemd[1]: Started systemd-logind.service - User Login Management. Jul 7 02:56:32.557691 bash[1532]: Updated "/home/core/.ssh/authorized_keys" Jul 7 02:56:32.560750 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 02:56:32.574149 systemd[1]: Starting sshkeys.service... Jul 7 02:56:32.636361 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jul 7 02:56:32.646882 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jul 7 02:56:32.722566 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jul 7 02:56:32.729592 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 7 02:56:32.730507 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jul 7 02:56:32.733155 dbus-daemon[1475]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1505 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 7 02:56:32.744037 systemd[1]: Starting polkit.service - Authorization Manager... Jul 7 02:56:32.750789 extend-filesystems[1516]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 02:56:32.750789 extend-filesystems[1516]: old_desc_blocks = 1, new_desc_blocks = 8 Jul 7 02:56:32.750789 extend-filesystems[1516]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jul 7 02:56:32.754246 extend-filesystems[1478]: Resized filesystem in /dev/vda9 Jul 7 02:56:32.756340 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 02:56:32.756633 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 7 02:56:32.769551 polkitd[1547]: Started polkitd version 121 Jul 7 02:56:32.787424 polkitd[1547]: Loading rules from directory /etc/polkit-1/rules.d Jul 7 02:56:32.787773 polkitd[1547]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 7 02:56:32.795607 polkitd[1547]: Finished loading, compiling and executing 2 rules Jul 7 02:56:32.796288 dbus-daemon[1475]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 7 02:56:32.797010 systemd[1]: Started polkit.service - Authorization Manager. 
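The extend-filesystems step grows the root filesystem on /dev/vda9 online: 1617920 blocks of 4 KiB (about 6.2 GiB) become 15121403 blocks (about 57.7 GiB), matching the resize2fs output above. What the service does is roughly equivalent to running the resize by hand against the already-enlarged partition:

    # online-grow a mounted ext4 filesystem to fill its partition
    resize2fs /dev/vda9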
Jul 7 02:56:32.797199 polkitd[1547]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 7 02:56:32.803425 locksmithd[1515]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 02:56:32.819100 containerd[1504]: time="2025-07-07T02:56:32.818427691Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 7 02:56:32.841466 systemd-hostnamed[1505]: Hostname set to (static) Jul 7 02:56:32.860932 containerd[1504]: time="2025-07-07T02:56:32.860868836Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.864149 containerd[1504]: time="2025-07-07T02:56:32.864107430Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:56:32.864253 containerd[1504]: time="2025-07-07T02:56:32.864228361Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 7 02:56:32.864446 containerd[1504]: time="2025-07-07T02:56:32.864418164Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 7 02:56:32.864823 containerd[1504]: time="2025-07-07T02:56:32.864794104Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 7 02:56:32.864939 containerd[1504]: time="2025-07-07T02:56:32.864912677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.865185 containerd[1504]: time="2025-07-07T02:56:32.865144647Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:56:32.865291 containerd[1504]: time="2025-07-07T02:56:32.865268086Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.865644966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.865676969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.865700347Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.865717305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.865842085Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866262 containerd[1504]: time="2025-07-07T02:56:32.866211011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866674 containerd[1504]: time="2025-07-07T02:56:32.866642077Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 7 02:56:32.866794 containerd[1504]: time="2025-07-07T02:56:32.866768507Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 7 02:56:32.867021 containerd[1504]: time="2025-07-07T02:56:32.866994780Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 7 02:56:32.867180 containerd[1504]: time="2025-07-07T02:56:32.867154948Z" level=info msg="metadata content store policy set" policy=shared Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873050696Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873233719Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873284714Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873309237Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873367648Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 7 02:56:32.874014 containerd[1504]: time="2025-07-07T02:56:32.873631776Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 7 02:56:32.874502 containerd[1504]: time="2025-07-07T02:56:32.874473882Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 7 02:56:32.875518 containerd[1504]: time="2025-07-07T02:56:32.875489860Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875619328Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875649636Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875690262Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875721397Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875767219Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.875840 containerd[1504]: time="2025-07-07T02:56:32.875793205Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jul 7 02:56:32.876153 containerd[1504]: time="2025-07-07T02:56:32.875816156Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.876153 containerd[1504]: time="2025-07-07T02:56:32.876102475Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.876292 containerd[1504]: time="2025-07-07T02:56:32.876133862Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878383964Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878452246Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878486278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878541916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878567901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878589500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878631496Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878651723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878671900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878711794Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878736305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878755195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878814215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878834834Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879303 containerd[1504]: time="2025-07-07T02:56:32.878884740Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 7 02:56:32.879898 containerd[1504]: time="2025-07-07T02:56:32.878964239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jul 7 02:56:32.879898 containerd[1504]: time="2025-07-07T02:56:32.878990961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.879898 containerd[1504]: time="2025-07-07T02:56:32.879009179Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 7 02:56:32.879898 containerd[1504]: time="2025-07-07T02:56:32.879130354Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880053682Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880086263Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880129006Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880148722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880168525Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880213182Z" level=info msg="NRI interface is disabled by configuration." Jul 7 02:56:32.880288 containerd[1504]: time="2025-07-07T02:56:32.880246094Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 7 02:56:32.884378 containerd[1504]: time="2025-07-07T02:56:32.883472601Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 7 02:56:32.884378 containerd[1504]: time="2025-07-07T02:56:32.883580966Z" level=info msg="Connect containerd service" Jul 7 02:56:32.884378 containerd[1504]: time="2025-07-07T02:56:32.883655517Z" level=info msg="using legacy CRI server" Jul 7 02:56:32.884378 containerd[1504]: time="2025-07-07T02:56:32.883672681Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 02:56:32.884378 containerd[1504]: time="2025-07-07T02:56:32.883881794Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 7 02:56:32.885405 containerd[1504]: time="2025-07-07T02:56:32.885336912Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 02:56:32.885662 
containerd[1504]: time="2025-07-07T02:56:32.885609445Z" level=info msg="Start subscribing containerd event" Jul 7 02:56:32.885807 containerd[1504]: time="2025-07-07T02:56:32.885779773Z" level=info msg="Start recovering state" Jul 7 02:56:32.886791 containerd[1504]: time="2025-07-07T02:56:32.886523447Z" level=info msg="Start event monitor" Jul 7 02:56:32.886791 containerd[1504]: time="2025-07-07T02:56:32.886568033Z" level=info msg="Start snapshots syncer" Jul 7 02:56:32.886791 containerd[1504]: time="2025-07-07T02:56:32.886591766Z" level=info msg="Start cni network conf syncer for default" Jul 7 02:56:32.886791 containerd[1504]: time="2025-07-07T02:56:32.886605836Z" level=info msg="Start streaming server" Jul 7 02:56:32.888427 containerd[1504]: time="2025-07-07T02:56:32.888388655Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 02:56:32.890016 containerd[1504]: time="2025-07-07T02:56:32.889446109Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 02:56:32.890016 containerd[1504]: time="2025-07-07T02:56:32.889560689Z" level=info msg="containerd successfully booted in 0.074474s" Jul 7 02:56:32.889679 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 02:56:33.207903 sshd_keygen[1513]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 02:56:33.241478 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 02:56:33.256570 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 02:56:33.268470 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 02:56:33.268772 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 02:56:33.279368 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 02:56:33.290133 tar[1488]: linux-amd64/LICENSE Jul 7 02:56:33.290133 tar[1488]: linux-amd64/README.md Jul 7 02:56:33.302452 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 02:56:33.309121 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 02:56:33.313659 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 02:56:33.315752 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 02:56:33.318172 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 02:56:33.646748 systemd-networkd[1433]: eth0: Gained IPv6LL Jul 7 02:56:33.650676 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 7 02:56:33.653284 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 02:56:33.657501 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 02:56:33.666767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:56:33.671685 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 02:56:33.710266 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 02:56:34.800132 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:56:34.812355 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:56:34.892066 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 02:56:34.902896 systemd[1]: Started sshd@0-10.244.11.130:22-139.178.68.195:55194.service - OpenSSH per-connection server daemon (139.178.68.195:55194). 
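The CRI configuration dumped a few entries above shows containerd 1.7 using the overlayfs snapshotter with runc as the default runtime and SystemdCgroup:true, and the "failed to load cni during init" error is expected until a CNI plugin drops a config into /etc/cni/net.d. Expressed as a config.toml fragment, this is a sketch consistent with the logged values rather than the file from this host:

    version = 2
    [plugins."io.containerd.grpc.v1.cri".containerd]
      snapshotter = "overlayfs"
      default_runtime_name = "runc"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true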
Jul 7 02:56:35.157135 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 7 02:56:35.159562 systemd-networkd[1433]: eth0: Ignoring DHCPv6 address 2a02:1348:17d:2e0:24:19ff:fef4:b82/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:17d:2e0:24:19ff:fef4:b82/64 assigned by NDisc. Jul 7 02:56:35.159575 systemd-networkd[1433]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Jul 7 02:56:35.518606 kubelet[1598]: E0707 02:56:35.518140 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:56:35.521098 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:56:35.521405 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:56:35.522103 systemd[1]: kubelet.service: Consumed 1.137s CPU time. Jul 7 02:56:35.791811 sshd[1600]: Accepted publickey for core from 139.178.68.195 port 55194 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:35.795579 sshd[1600]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:35.814802 systemd-logind[1484]: New session 1 of user core. Jul 7 02:56:35.818028 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 02:56:35.827882 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 02:56:35.866219 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 02:56:35.874863 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 02:56:35.891565 (systemd)[1613]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 02:56:36.040469 systemd[1613]: Queued start job for default target default.target. Jul 7 02:56:36.048924 systemd[1613]: Created slice app.slice - User Application Slice. Jul 7 02:56:36.048976 systemd[1613]: Reached target paths.target - Paths. Jul 7 02:56:36.049001 systemd[1613]: Reached target timers.target - Timers. Jul 7 02:56:36.051438 systemd[1613]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 02:56:36.077644 systemd[1613]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 02:56:36.077844 systemd[1613]: Reached target sockets.target - Sockets. Jul 7 02:56:36.077870 systemd[1613]: Reached target basic.target - Basic System. Jul 7 02:56:36.077936 systemd[1613]: Reached target default.target - Main User Target. Jul 7 02:56:36.077991 systemd[1613]: Startup finished in 176ms. Jul 7 02:56:36.078033 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 02:56:36.094787 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 02:56:36.167483 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 7 02:56:36.728764 systemd[1]: Started sshd@1-10.244.11.130:22-139.178.68.195:55206.service - OpenSSH per-connection server daemon (139.178.68.195:55206). Jul 7 02:56:36.975624 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. 
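The kubelet exits immediately because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-provisioned node that file is written by kubeadm init or kubeadm join, so repeated failures before provisioning are expected and systemd simply keeps restarting the unit. For illustration only, a hypothetical minimal KubeletConfiguration of the kind that would satisfy the load (not the configuration later used on this host):

    # /var/lib/kubelet/config.yaml (illustrative)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd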
Jul 7 02:56:37.623856 sshd[1624]: Accepted publickey for core from 139.178.68.195 port 55206 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:37.626111 sshd[1624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:37.633112 systemd-logind[1484]: New session 2 of user core. Jul 7 02:56:37.643873 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 02:56:38.247759 sshd[1624]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:38.252658 systemd[1]: sshd@1-10.244.11.130:22-139.178.68.195:55206.service: Deactivated successfully. Jul 7 02:56:38.255457 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 02:56:38.256638 systemd-logind[1484]: Session 2 logged out. Waiting for processes to exit. Jul 7 02:56:38.258170 systemd-logind[1484]: Removed session 2. Jul 7 02:56:38.363287 login[1579]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 02:56:38.374411 systemd-logind[1484]: New session 3 of user core. Jul 7 02:56:38.380666 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 02:56:38.381374 login[1578]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 7 02:56:38.407825 systemd[1]: Started sshd@2-10.244.11.130:22-139.178.68.195:33918.service - OpenSSH per-connection server daemon (139.178.68.195:33918). Jul 7 02:56:38.415975 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 02:56:38.417753 systemd-logind[1484]: New session 4 of user core. Jul 7 02:56:39.243215 coreos-metadata[1474]: Jul 07 02:56:39.243 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:56:39.269436 coreos-metadata[1474]: Jul 07 02:56:39.269 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jul 7 02:56:39.278380 coreos-metadata[1474]: Jul 07 02:56:39.278 INFO Fetch failed with 404: resource not found Jul 7 02:56:39.278854 coreos-metadata[1474]: Jul 07 02:56:39.278 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jul 7 02:56:39.280047 coreos-metadata[1474]: Jul 07 02:56:39.280 INFO Fetch successful Jul 7 02:56:39.280194 coreos-metadata[1474]: Jul 07 02:56:39.280 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jul 7 02:56:39.285035 sshd[1635]: Accepted publickey for core from 139.178.68.195 port 33918 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:39.287130 sshd[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:39.293233 systemd-logind[1484]: New session 5 of user core. Jul 7 02:56:39.300379 coreos-metadata[1474]: Jul 07 02:56:39.300 INFO Fetch successful Jul 7 02:56:39.300711 coreos-metadata[1474]: Jul 07 02:56:39.300 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jul 7 02:56:39.302644 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 7 02:56:39.315504 coreos-metadata[1474]: Jul 07 02:56:39.315 INFO Fetch successful Jul 7 02:56:39.315736 coreos-metadata[1474]: Jul 07 02:56:39.315 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jul 7 02:56:39.333412 coreos-metadata[1474]: Jul 07 02:56:39.333 INFO Fetch successful Jul 7 02:56:39.333809 coreos-metadata[1474]: Jul 07 02:56:39.333 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jul 7 02:56:39.381171 coreos-metadata[1474]: Jul 07 02:56:39.381 INFO Fetch successful Jul 7 02:56:39.419935 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 7 02:56:39.421748 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 02:56:39.770754 coreos-metadata[1542]: Jul 07 02:56:39.770 WARN failed to locate config-drive, using the metadata service API instead Jul 7 02:56:39.792744 coreos-metadata[1542]: Jul 07 02:56:39.792 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jul 7 02:56:39.826646 coreos-metadata[1542]: Jul 07 02:56:39.826 INFO Fetch successful Jul 7 02:56:39.826792 coreos-metadata[1542]: Jul 07 02:56:39.826 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 7 02:56:39.873533 coreos-metadata[1542]: Jul 07 02:56:39.873 INFO Fetch successful Jul 7 02:56:39.875643 unknown[1542]: wrote ssh authorized keys file for user: core Jul 7 02:56:39.913482 sshd[1635]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:39.917824 systemd[1]: sshd@2-10.244.11.130:22-139.178.68.195:33918.service: Deactivated successfully. Jul 7 02:56:39.919305 update-ssh-keys[1668]: Updated "/home/core/.ssh/authorized_keys" Jul 7 02:56:39.920982 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jul 7 02:56:39.921883 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 02:56:39.922948 systemd[1]: Finished sshkeys.service. Jul 7 02:56:39.925714 systemd-logind[1484]: Session 5 logged out. Waiting for processes to exit. Jul 7 02:56:39.928059 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 02:56:39.931451 systemd[1]: Startup finished in 1.409s (kernel) + 14.335s (initrd) + 11.884s (userspace) = 27.629s. Jul 7 02:56:39.931880 systemd-logind[1484]: Removed session 5. Jul 7 02:56:45.772127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 02:56:45.781787 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:56:45.963225 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:56:45.976312 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:56:46.084432 kubelet[1682]: E0707 02:56:46.083130 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:56:46.087046 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:56:46.087355 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
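With no config-drive found, coreos-metadata falls back to the EC2-compatible metadata endpoint that OpenStack exposes, and the sshkeys variant writes the fetched key to /home/core/.ssh/authorized_keys before multi-user.target is reached. The same endpoints it logs can be queried by hand, for example:

    curl http://169.254.169.254/latest/meta-data/hostname
    curl http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key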
Jul 7 02:56:50.066766 systemd[1]: Started sshd@3-10.244.11.130:22-139.178.68.195:46304.service - OpenSSH per-connection server daemon (139.178.68.195:46304). Jul 7 02:56:50.948311 sshd[1691]: Accepted publickey for core from 139.178.68.195 port 46304 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:50.950981 sshd[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:50.958936 systemd-logind[1484]: New session 6 of user core. Jul 7 02:56:50.970695 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 02:56:51.568510 sshd[1691]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:51.572592 systemd[1]: sshd@3-10.244.11.130:22-139.178.68.195:46304.service: Deactivated successfully. Jul 7 02:56:51.575628 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 02:56:51.578304 systemd-logind[1484]: Session 6 logged out. Waiting for processes to exit. Jul 7 02:56:51.580215 systemd-logind[1484]: Removed session 6. Jul 7 02:56:51.727678 systemd[1]: Started sshd@4-10.244.11.130:22-139.178.68.195:46314.service - OpenSSH per-connection server daemon (139.178.68.195:46314). Jul 7 02:56:52.625747 sshd[1698]: Accepted publickey for core from 139.178.68.195 port 46314 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:52.627882 sshd[1698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:52.637463 systemd-logind[1484]: New session 7 of user core. Jul 7 02:56:52.639547 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 02:56:53.246774 sshd[1698]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:53.252226 systemd[1]: sshd@4-10.244.11.130:22-139.178.68.195:46314.service: Deactivated successfully. Jul 7 02:56:53.254884 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 02:56:53.257306 systemd-logind[1484]: Session 7 logged out. Waiting for processes to exit. Jul 7 02:56:53.259094 systemd-logind[1484]: Removed session 7. Jul 7 02:56:53.409713 systemd[1]: Started sshd@5-10.244.11.130:22-139.178.68.195:46316.service - OpenSSH per-connection server daemon (139.178.68.195:46316). Jul 7 02:56:54.306386 sshd[1705]: Accepted publickey for core from 139.178.68.195 port 46316 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:54.309564 sshd[1705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:54.317144 systemd-logind[1484]: New session 8 of user core. Jul 7 02:56:54.326697 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 02:56:54.932847 sshd[1705]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:54.937107 systemd-logind[1484]: Session 8 logged out. Waiting for processes to exit. Jul 7 02:56:54.939775 systemd[1]: sshd@5-10.244.11.130:22-139.178.68.195:46316.service: Deactivated successfully. Jul 7 02:56:54.942500 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 02:56:54.944547 systemd-logind[1484]: Removed session 8. Jul 7 02:56:55.086144 systemd[1]: Started sshd@6-10.244.11.130:22-139.178.68.195:46326.service - OpenSSH per-connection server daemon (139.178.68.195:46326). Jul 7 02:56:55.982381 sshd[1712]: Accepted publickey for core from 139.178.68.195 port 46326 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:55.984876 sshd[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:55.993476 systemd-logind[1484]: New session 9 of user core. 
Jul 7 02:56:56.002565 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 02:56:56.237293 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 02:56:56.245849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:56:56.408786 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:56:56.425824 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:56:56.614552 kubelet[1723]: E0707 02:56:56.612845 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:56:56.617953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:56:56.618220 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:56:56.623325 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 02:56:56.624400 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:56:56.640749 sudo[1728]: pam_unix(sudo:session): session closed for user root Jul 7 02:56:56.785107 sshd[1712]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:56.790330 systemd[1]: sshd@6-10.244.11.130:22-139.178.68.195:46326.service: Deactivated successfully. Jul 7 02:56:56.792554 systemd[1]: session-9.scope: Deactivated successfully. Jul 7 02:56:56.793493 systemd-logind[1484]: Session 9 logged out. Waiting for processes to exit. Jul 7 02:56:56.795317 systemd-logind[1484]: Removed session 9. Jul 7 02:56:56.946454 systemd[1]: Started sshd@7-10.244.11.130:22-139.178.68.195:46330.service - OpenSSH per-connection server daemon (139.178.68.195:46330). Jul 7 02:56:57.842265 sshd[1735]: Accepted publickey for core from 139.178.68.195 port 46330 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:57.844330 sshd[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:57.850499 systemd-logind[1484]: New session 10 of user core. Jul 7 02:56:57.857593 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 02:56:58.320705 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 02:56:58.321154 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:56:58.327175 sudo[1739]: pam_unix(sudo:session): session closed for user root Jul 7 02:56:58.335519 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jul 7 02:56:58.335975 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:56:58.362781 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jul 7 02:56:58.365605 auditctl[1742]: No rules Jul 7 02:56:58.366120 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 02:56:58.366439 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jul 7 02:56:58.375032 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
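The sudo sequence above deletes the stock rule files under /etc/audit/rules.d/ and restarts audit-rules.service; auditctl reports "No rules" once the old set has been flushed. augenrules rebuilds the kernel rule set by concatenating whatever remains in /etc/audit/rules.d/*.rules and loading it, roughly:

    augenrules --load   # regenerate and load rules from /etc/audit/rules.d/
    auditctl -l         # list the rules currently loaded (prints "No rules" when empty)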
Jul 7 02:56:58.413736 augenrules[1760]: No rules Jul 7 02:56:58.414620 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 7 02:56:58.416295 sudo[1738]: pam_unix(sudo:session): session closed for user root Jul 7 02:56:58.560542 sshd[1735]: pam_unix(sshd:session): session closed for user core Jul 7 02:56:58.565579 systemd[1]: sshd@7-10.244.11.130:22-139.178.68.195:46330.service: Deactivated successfully. Jul 7 02:56:58.568178 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 02:56:58.569419 systemd-logind[1484]: Session 10 logged out. Waiting for processes to exit. Jul 7 02:56:58.570862 systemd-logind[1484]: Removed session 10. Jul 7 02:56:58.723669 systemd[1]: Started sshd@8-10.244.11.130:22-139.178.68.195:42090.service - OpenSSH per-connection server daemon (139.178.68.195:42090). Jul 7 02:56:59.604301 sshd[1768]: Accepted publickey for core from 139.178.68.195 port 42090 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:56:59.606284 sshd[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:56:59.612933 systemd-logind[1484]: New session 11 of user core. Jul 7 02:56:59.618605 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 7 02:57:00.078802 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 02:57:00.079288 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 02:57:00.584857 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 02:57:00.585096 (dockerd)[1787]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 02:57:01.028681 dockerd[1787]: time="2025-07-07T02:57:01.028555859Z" level=info msg="Starting up" Jul 7 02:57:01.153137 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3740074550-merged.mount: Deactivated successfully. Jul 7 02:57:01.184050 dockerd[1787]: time="2025-07-07T02:57:01.183645730Z" level=info msg="Loading containers: start." Jul 7 02:57:01.327376 kernel: Initializing XFRM netlink socket Jul 7 02:57:01.365303 systemd-timesyncd[1402]: Network configuration changed, trying to establish connection. Jul 7 02:57:01.443963 systemd-networkd[1433]: docker0: Link UP Jul 7 02:57:01.472146 dockerd[1787]: time="2025-07-07T02:57:01.472035597Z" level=info msg="Loading containers: done." Jul 7 02:57:01.493400 dockerd[1787]: time="2025-07-07T02:57:01.492822047Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 02:57:01.493400 dockerd[1787]: time="2025-07-07T02:57:01.493004089Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jul 7 02:57:01.493400 dockerd[1787]: time="2025-07-07T02:57:01.493191366Z" level=info msg="Daemon has completed initialization" Jul 7 02:57:01.530635 dockerd[1787]: time="2025-07-07T02:57:01.530546072Z" level=info msg="API listen on /run/docker.sock" Jul 7 02:57:01.531492 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 02:57:02.508129 systemd-timesyncd[1402]: Contacted time server [2a02:ac00:2:1::5]:123 (2.flatcar.pool.ntp.org). Jul 7 02:57:02.508171 systemd-resolved[1387]: Clock change detected. Flushing caches. 
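dockerd comes up on the overlay2 storage driver; the warning about native diff only means the kernel's CONFIG_OVERLAY_FS_REDIRECT_DIR forces the slower naive diff path when building images, it is not an error. The active driver can be confirmed with, for example:

    docker info --format '{{.Driver}}'    # expected to print: overlay2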
Jul 7 02:57:02.508267 systemd-timesyncd[1402]: Initial clock synchronization to Mon 2025-07-07 02:57:02.507505 UTC. Jul 7 02:57:03.013460 containerd[1504]: time="2025-07-07T02:57:03.013142927Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" Jul 7 02:57:04.042958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount534713076.mount: Deactivated successfully. Jul 7 02:57:05.849162 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 7 02:57:06.332260 containerd[1504]: time="2025-07-07T02:57:06.332138316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:06.334132 containerd[1504]: time="2025-07-07T02:57:06.334064834Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=27960995" Jul 7 02:57:06.336065 containerd[1504]: time="2025-07-07T02:57:06.336024797Z" level=info msg="ImageCreate event name:\"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:06.342178 containerd[1504]: time="2025-07-07T02:57:06.339942716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:06.342929 containerd[1504]: time="2025-07-07T02:57:06.342131498Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"27957787\" in 3.328844715s" Jul 7 02:57:06.343026 containerd[1504]: time="2025-07-07T02:57:06.342977962Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:e6d208e868a9ca7f89efcb0d5bddc55a62df551cb4fb39c5099a2fe7b0e33adc\"" Jul 7 02:57:06.344552 containerd[1504]: time="2025-07-07T02:57:06.344521200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" Jul 7 02:57:07.405383 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 7 02:57:07.413501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:07.591629 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:07.598475 (kubelet)[1996]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:57:07.776439 kubelet[1996]: E0707 02:57:07.775348 1996 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:57:07.778866 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:57:07.779123 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
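The PullImage lines are containerd's CRI plugin fetching the Kubernetes control-plane images; the kube-apiserver image (about 28 MB per the bytes-read counter) completes in roughly 3.3 s. The equivalent manual pull through the CRI, shown as an illustration:

    crictl pull registry.k8s.io/kube-apiserver:v1.31.8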
Jul 7 02:57:09.560206 containerd[1504]: time="2025-07-07T02:57:09.558666234Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:09.560206 containerd[1504]: time="2025-07-07T02:57:09.560086407Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=24713784" Jul 7 02:57:09.560206 containerd[1504]: time="2025-07-07T02:57:09.560130285Z" level=info msg="ImageCreate event name:\"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:09.563993 containerd[1504]: time="2025-07-07T02:57:09.563951206Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:09.565820 containerd[1504]: time="2025-07-07T02:57:09.565778188Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"26202149\" in 3.221210577s" Jul 7 02:57:09.565925 containerd[1504]: time="2025-07-07T02:57:09.565824056Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:fbda0bc3bc4bb93c8b2d8627a9aa8d945c200b51e48c88f9b837dde628fc7c8f\"" Jul 7 02:57:09.567025 containerd[1504]: time="2025-07-07T02:57:09.566991987Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" Jul 7 02:57:11.759280 containerd[1504]: time="2025-07-07T02:57:11.758900438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:11.760671 containerd[1504]: time="2025-07-07T02:57:11.760601613Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=18780394" Jul 7 02:57:11.761785 containerd[1504]: time="2025-07-07T02:57:11.761710640Z" level=info msg="ImageCreate event name:\"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:11.767117 containerd[1504]: time="2025-07-07T02:57:11.766466417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:11.768208 containerd[1504]: time="2025-07-07T02:57:11.768165594Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"20268777\" in 2.201133216s" Jul 7 02:57:11.768307 containerd[1504]: time="2025-07-07T02:57:11.768214402Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:2a9c646db0be37003c2b50605a252f7139145411d9e4e0badd8ae07f56ce5eb8\"" Jul 7 02:57:11.769459 containerd[1504]: 
time="2025-07-07T02:57:11.769428127Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Jul 7 02:57:13.725068 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385938952.mount: Deactivated successfully. Jul 7 02:57:14.556266 containerd[1504]: time="2025-07-07T02:57:14.555401669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:14.557036 containerd[1504]: time="2025-07-07T02:57:14.556381457Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Jul 7 02:57:14.557854 containerd[1504]: time="2025-07-07T02:57:14.557778988Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:14.560841 containerd[1504]: time="2025-07-07T02:57:14.560771747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:14.562144 containerd[1504]: time="2025-07-07T02:57:14.561980991Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.79241055s" Jul 7 02:57:14.562144 containerd[1504]: time="2025-07-07T02:57:14.562024267Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Jul 7 02:57:14.563036 containerd[1504]: time="2025-07-07T02:57:14.562850237Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 02:57:15.509935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702263363.mount: Deactivated successfully. 
Jul 7 02:57:16.719772 containerd[1504]: time="2025-07-07T02:57:16.719695854Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:16.724607 containerd[1504]: time="2025-07-07T02:57:16.724542630Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565249" Jul 7 02:57:16.770571 containerd[1504]: time="2025-07-07T02:57:16.770459730Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:16.774573 containerd[1504]: time="2025-07-07T02:57:16.774533236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:16.777486 containerd[1504]: time="2025-07-07T02:57:16.776659098Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.213768321s" Jul 7 02:57:16.777486 containerd[1504]: time="2025-07-07T02:57:16.776723588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 02:57:16.777721 containerd[1504]: time="2025-07-07T02:57:16.777642015Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 02:57:17.905202 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 7 02:57:17.920580 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:17.973259 update_engine[1485]: I20250707 02:57:17.972401 1485 update_attempter.cc:509] Updating boot flags... Jul 7 02:57:18.019348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount984491531.mount: Deactivated successfully. Jul 7 02:57:18.036263 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2084) Jul 7 02:57:18.242307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 02:57:18.246591 containerd[1504]: time="2025-07-07T02:57:18.246539792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:18.263394 containerd[1504]: time="2025-07-07T02:57:18.262828994Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jul 7 02:57:18.264904 containerd[1504]: time="2025-07-07T02:57:18.264772510Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:18.268819 (kubelet)[2095]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 02:57:18.300272 containerd[1504]: time="2025-07-07T02:57:18.300188747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:18.302628 containerd[1504]: time="2025-07-07T02:57:18.302566093Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.524887866s" Jul 7 02:57:18.302739 containerd[1504]: time="2025-07-07T02:57:18.302631925Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 02:57:18.307221 containerd[1504]: time="2025-07-07T02:57:18.306943112Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 7 02:57:18.307332 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2083) Jul 7 02:57:18.399914 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (2083) Jul 7 02:57:18.428948 kubelet[2095]: E0707 02:57:18.428880 2095 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 02:57:18.432059 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 02:57:18.433747 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 02:57:19.571338 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3910633029.mount: Deactivated successfully. 
Jul 7 02:57:23.976309 containerd[1504]: time="2025-07-07T02:57:23.974645755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:23.976309 containerd[1504]: time="2025-07-07T02:57:23.976077529Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780021" Jul 7 02:57:23.977920 containerd[1504]: time="2025-07-07T02:57:23.977747285Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:23.984057 containerd[1504]: time="2025-07-07T02:57:23.983979445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:23.986146 containerd[1504]: time="2025-07-07T02:57:23.985742944Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 5.678746139s" Jul 7 02:57:23.986146 containerd[1504]: time="2025-07-07T02:57:23.985810930Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jul 7 02:57:28.178214 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:28.194752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:28.244677 systemd[1]: Reloading requested from client PID 2187 ('systemctl') (unit session-11.scope)... Jul 7 02:57:28.244963 systemd[1]: Reloading... Jul 7 02:57:28.436263 zram_generator::config[2222]: No configuration found. Jul 7 02:57:28.624228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:57:28.740853 systemd[1]: Reloading finished in 495 ms. Jul 7 02:57:28.821978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:28.826617 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:28.831950 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 02:57:28.832360 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:28.838671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:29.133442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:29.145771 (kubelet)[2295]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 02:57:29.206488 kubelet[2295]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:57:29.206488 kubelet[2295]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 7 02:57:29.206488 kubelet[2295]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:57:29.207149 kubelet[2295]: I0707 02:57:29.206600 2295 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 02:57:29.620265 kubelet[2295]: I0707 02:57:29.620166 2295 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 02:57:29.620265 kubelet[2295]: I0707 02:57:29.620252 2295 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 02:57:29.620659 kubelet[2295]: I0707 02:57:29.620614 2295 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 02:57:29.652899 kubelet[2295]: I0707 02:57:29.652840 2295 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:57:29.656860 kubelet[2295]: E0707 02:57:29.656771 2295 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.11.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:29.667076 kubelet[2295]: E0707 02:57:29.667038 2295 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 02:57:29.667372 kubelet[2295]: I0707 02:57:29.667221 2295 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 02:57:29.679176 kubelet[2295]: I0707 02:57:29.678954 2295 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 02:57:29.681291 kubelet[2295]: I0707 02:57:29.681209 2295 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 02:57:29.682297 kubelet[2295]: I0707 02:57:29.681667 2295 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 02:57:29.682297 kubelet[2295]: I0707 02:57:29.681765 2295 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3i0x6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 02:57:29.682297 kubelet[2295]: I0707 02:57:29.682127 2295 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 02:57:29.682297 kubelet[2295]: I0707 02:57:29.682143 2295 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 02:57:29.682711 kubelet[2295]: I0707 02:57:29.682385 2295 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:57:29.686220 kubelet[2295]: I0707 02:57:29.686183 2295 kubelet.go:408] "Attempting to sync node with API server" Jul 7 02:57:29.686220 kubelet[2295]: I0707 02:57:29.686226 2295 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 02:57:29.688422 kubelet[2295]: W0707 02:57:29.688281 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.11.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3i0x6.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:29.688422 kubelet[2295]: E0707 02:57:29.688379 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.11.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3i0x6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:29.688819 kubelet[2295]: I0707 02:57:29.688784 2295 kubelet.go:314] "Adding apiserver pod 
source" Jul 7 02:57:29.688904 kubelet[2295]: I0707 02:57:29.688853 2295 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 02:57:29.691995 kubelet[2295]: W0707 02:57:29.691936 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.11.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:29.692081 kubelet[2295]: E0707 02:57:29.692006 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.11.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:29.692817 kubelet[2295]: I0707 02:57:29.692596 2295 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 02:57:29.696079 kubelet[2295]: I0707 02:57:29.695912 2295 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 02:57:29.697139 kubelet[2295]: W0707 02:57:29.696742 2295 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 02:57:29.697963 kubelet[2295]: I0707 02:57:29.697741 2295 server.go:1274] "Started kubelet" Jul 7 02:57:29.698955 kubelet[2295]: I0707 02:57:29.698579 2295 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 02:57:29.700213 kubelet[2295]: I0707 02:57:29.700079 2295 server.go:449] "Adding debug handlers to kubelet server" Jul 7 02:57:29.707134 kubelet[2295]: I0707 02:57:29.707064 2295 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 02:57:29.717629 kubelet[2295]: I0707 02:57:29.716659 2295 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 02:57:29.717629 kubelet[2295]: I0707 02:57:29.717210 2295 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 02:57:29.719464 kubelet[2295]: I0707 02:57:29.718351 2295 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 02:57:29.720590 kubelet[2295]: I0707 02:57:29.719733 2295 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 02:57:29.720590 kubelet[2295]: E0707 02:57:29.720171 2295 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3i0x6.gb1.brightbox.com\" not found" Jul 7 02:57:29.721086 kubelet[2295]: E0707 02:57:29.717511 2295 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.244.11.130:6443/api/v1/namespaces/default/events\": dial tcp 10.244.11.130:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-3i0x6.gb1.brightbox.com.184fd8b4787ef4d7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-3i0x6.gb1.brightbox.com,UID:srv-3i0x6.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-3i0x6.gb1.brightbox.com,},FirstTimestamp:2025-07-07 02:57:29.697711319 +0000 UTC m=+0.545754333,LastTimestamp:2025-07-07 02:57:29.697711319 +0000 UTC 
m=+0.545754333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-3i0x6.gb1.brightbox.com,}" Jul 7 02:57:29.726548 kubelet[2295]: I0707 02:57:29.724828 2295 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 02:57:29.727260 kubelet[2295]: E0707 02:57:29.722644 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.11.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3i0x6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.11.130:6443: connect: connection refused" interval="200ms" Jul 7 02:57:29.727260 kubelet[2295]: W0707 02:57:29.727014 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.11.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:29.727260 kubelet[2295]: E0707 02:57:29.727082 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.11.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:29.728302 kubelet[2295]: I0707 02:57:29.724943 2295 reconciler.go:26] "Reconciler: start to sync state" Jul 7 02:57:29.729670 kubelet[2295]: I0707 02:57:29.729644 2295 factory.go:221] Registration of the systemd container factory successfully Jul 7 02:57:29.729789 kubelet[2295]: I0707 02:57:29.729759 2295 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 02:57:29.732191 kubelet[2295]: I0707 02:57:29.732166 2295 factory.go:221] Registration of the containerd container factory successfully Jul 7 02:57:29.735296 kubelet[2295]: E0707 02:57:29.732486 2295 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 02:57:29.759027 kubelet[2295]: I0707 02:57:29.758448 2295 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 02:57:29.762292 kubelet[2295]: I0707 02:57:29.761309 2295 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 7 02:57:29.762292 kubelet[2295]: I0707 02:57:29.761365 2295 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 02:57:29.762292 kubelet[2295]: I0707 02:57:29.761409 2295 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 02:57:29.762292 kubelet[2295]: E0707 02:57:29.761494 2295 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 02:57:29.769051 kubelet[2295]: W0707 02:57:29.768986 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.11.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:29.769339 kubelet[2295]: E0707 02:57:29.769311 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.11.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:29.778415 kubelet[2295]: I0707 02:57:29.778361 2295 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 02:57:29.778415 kubelet[2295]: I0707 02:57:29.778405 2295 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 02:57:29.778588 kubelet[2295]: I0707 02:57:29.778452 2295 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:57:29.780398 kubelet[2295]: I0707 02:57:29.780368 2295 policy_none.go:49] "None policy: Start" Jul 7 02:57:29.781468 kubelet[2295]: I0707 02:57:29.781436 2295 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 02:57:29.781526 kubelet[2295]: I0707 02:57:29.781468 2295 state_mem.go:35] "Initializing new in-memory state store" Jul 7 02:57:29.790584 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 02:57:29.803616 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 02:57:29.818917 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 7 02:57:29.820888 kubelet[2295]: E0707 02:57:29.820802 2295 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"srv-3i0x6.gb1.brightbox.com\" not found" Jul 7 02:57:29.821783 kubelet[2295]: I0707 02:57:29.821709 2295 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 02:57:29.822259 kubelet[2295]: I0707 02:57:29.822076 2295 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 02:57:29.822259 kubelet[2295]: I0707 02:57:29.822124 2295 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 02:57:29.823348 kubelet[2295]: I0707 02:57:29.822898 2295 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 02:57:29.825430 kubelet[2295]: E0707 02:57:29.825377 2295 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-3i0x6.gb1.brightbox.com\" not found" Jul 7 02:57:29.877350 systemd[1]: Created slice kubepods-burstable-podb4b3514224b58f43bdc0a663e184339a.slice - libcontainer container kubepods-burstable-podb4b3514224b58f43bdc0a663e184339a.slice. 
Jul 7 02:57:29.906884 systemd[1]: Created slice kubepods-burstable-pod76a41ac5d44e1a2715c7a929baf69139.slice - libcontainer container kubepods-burstable-pod76a41ac5d44e1a2715c7a929baf69139.slice. Jul 7 02:57:29.913675 systemd[1]: Created slice kubepods-burstable-poda312cd016c1e5247bd33f248933535a0.slice - libcontainer container kubepods-burstable-poda312cd016c1e5247bd33f248933535a0.slice. Jul 7 02:57:29.926656 kubelet[2295]: I0707 02:57:29.926592 2295 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.927390 kubelet[2295]: E0707 02:57:29.927320 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.11.130:6443/api/v1/nodes\": dial tcp 10.244.11.130:6443: connect: connection refused" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.927390 kubelet[2295]: E0707 02:57:29.927353 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.11.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3i0x6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.11.130:6443: connect: connection refused" interval="400ms" Jul 7 02:57:29.929913 kubelet[2295]: I0707 02:57:29.929804 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-k8s-certs\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.929913 kubelet[2295]: I0707 02:57:29.929859 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-ca-certs\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.929913 kubelet[2295]: I0707 02:57:29.929894 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-k8s-certs\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930384 kubelet[2295]: I0707 02:57:29.929924 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930384 kubelet[2295]: I0707 02:57:29.929952 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-ca-certs\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930384 kubelet[2295]: I0707 02:57:29.929978 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930384 kubelet[2295]: I0707 02:57:29.930042 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-flexvolume-dir\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930384 kubelet[2295]: I0707 02:57:29.930130 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-kubeconfig\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:29.930734 kubelet[2295]: I0707 02:57:29.930165 2295 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a312cd016c1e5247bd33f248933535a0-kubeconfig\") pod \"kube-scheduler-srv-3i0x6.gb1.brightbox.com\" (UID: \"a312cd016c1e5247bd33f248933535a0\") " pod="kube-system/kube-scheduler-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:30.131435 kubelet[2295]: I0707 02:57:30.130918 2295 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:30.131682 kubelet[2295]: E0707 02:57:30.131416 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.11.130:6443/api/v1/nodes\": dial tcp 10.244.11.130:6443: connect: connection refused" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:30.205253 containerd[1504]: time="2025-07-07T02:57:30.205082399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3i0x6.gb1.brightbox.com,Uid:b4b3514224b58f43bdc0a663e184339a,Namespace:kube-system,Attempt:0,}" Jul 7 02:57:30.219418 containerd[1504]: time="2025-07-07T02:57:30.219358343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3i0x6.gb1.brightbox.com,Uid:a312cd016c1e5247bd33f248933535a0,Namespace:kube-system,Attempt:0,}" Jul 7 02:57:30.219809 containerd[1504]: time="2025-07-07T02:57:30.219767727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3i0x6.gb1.brightbox.com,Uid:76a41ac5d44e1a2715c7a929baf69139,Namespace:kube-system,Attempt:0,}" Jul 7 02:57:30.328405 kubelet[2295]: E0707 02:57:30.328327 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.11.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3i0x6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.11.130:6443: connect: connection refused" interval="800ms" Jul 7 02:57:30.535534 kubelet[2295]: I0707 02:57:30.535419 2295 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:30.535945 kubelet[2295]: E0707 02:57:30.535902 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.244.11.130:6443/api/v1/nodes\": dial tcp 10.244.11.130:6443: connect: connection refused" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:30.675936 
kubelet[2295]: W0707 02:57:30.675778 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.244.11.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:30.675936 kubelet[2295]: E0707 02:57:30.675880 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.244.11.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:31.013202 kubelet[2295]: W0707 02:57:31.013040 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.244.11.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3i0x6.gb1.brightbox.com&limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:31.013202 kubelet[2295]: E0707 02:57:31.013161 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.244.11.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-3i0x6.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:31.130082 kubelet[2295]: E0707 02:57:31.130000 2295 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.244.11.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-3i0x6.gb1.brightbox.com?timeout=10s\": dial tcp 10.244.11.130:6443: connect: connection refused" interval="1.6s" Jul 7 02:57:31.191395 kubelet[2295]: W0707 02:57:31.191299 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.244.11.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:31.191598 kubelet[2295]: E0707 02:57:31.191405 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.244.11.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:31.282636 kubelet[2295]: W0707 02:57:31.282407 2295 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.244.11.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.244.11.130:6443: connect: connection refused Jul 7 02:57:31.282636 kubelet[2295]: E0707 02:57:31.282510 2295 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.244.11.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:31.339038 kubelet[2295]: I0707 02:57:31.338977 2295 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:31.339677 kubelet[2295]: E0707 02:57:31.339478 2295 kubelet_node_status.go:95] "Unable to register node with API server" err="Post 
\"https://10.244.11.130:6443/api/v1/nodes\": dial tcp 10.244.11.130:6443: connect: connection refused" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:31.431067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3453578649.mount: Deactivated successfully. Jul 7 02:57:31.439705 containerd[1504]: time="2025-07-07T02:57:31.439595990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:57:31.440927 containerd[1504]: time="2025-07-07T02:57:31.440892281Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:57:31.442174 containerd[1504]: time="2025-07-07T02:57:31.442084799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 02:57:31.442484 containerd[1504]: time="2025-07-07T02:57:31.442391127Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 7 02:57:31.443066 containerd[1504]: time="2025-07-07T02:57:31.443004481Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:57:31.447567 containerd[1504]: time="2025-07-07T02:57:31.447495300Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 7 02:57:31.460466 containerd[1504]: time="2025-07-07T02:57:31.460391481Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:57:31.469071 containerd[1504]: time="2025-07-07T02:57:31.468224607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 02:57:31.470661 containerd[1504]: time="2025-07-07T02:57:31.470616343Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.250755414s" Jul 7 02:57:31.475532 containerd[1504]: time="2025-07-07T02:57:31.475492417Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.270191587s" Jul 7 02:57:31.477610 containerd[1504]: time="2025-07-07T02:57:31.477560497Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.258097412s" Jul 7 02:57:31.663947 kubelet[2295]: E0707 02:57:31.663899 2295 
certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.244.11.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.244.11.130:6443: connect: connection refused" logger="UnhandledError" Jul 7 02:57:31.718043 containerd[1504]: time="2025-07-07T02:57:31.717459823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:57:31.718043 containerd[1504]: time="2025-07-07T02:57:31.717643138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:57:31.718043 containerd[1504]: time="2025-07-07T02:57:31.717722787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.718043 containerd[1504]: time="2025-07-07T02:57:31.717938470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.727372109Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728608359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728665757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728676964Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728757629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728782623Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728897780Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.728963 containerd[1504]: time="2025-07-07T02:57:31.728892700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:31.772513 systemd[1]: Started cri-containerd-969766e1a7c05a51f96d18d1f8cc9e663e035aeb18401fba928928db0df7df34.scope - libcontainer container 969766e1a7c05a51f96d18d1f8cc9e663e035aeb18401fba928928db0df7df34. Jul 7 02:57:31.786509 systemd[1]: Started cri-containerd-05e4746e824c85427c45db3a2325b90b5988a557703663b67a507fd15e615bbf.scope - libcontainer container 05e4746e824c85427c45db3a2325b90b5988a557703663b67a507fd15e615bbf. Jul 7 02:57:31.790597 systemd[1]: Started cri-containerd-3d5f3c2de8cdc932194654d006bef162f1eccd33eab7df5bcbfd2a668b25434f.scope - libcontainer container 3d5f3c2de8cdc932194654d006bef162f1eccd33eab7df5bcbfd2a668b25434f. 
Jul 7 02:57:31.895187 containerd[1504]: time="2025-07-07T02:57:31.894978722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-3i0x6.gb1.brightbox.com,Uid:b4b3514224b58f43bdc0a663e184339a,Namespace:kube-system,Attempt:0,} returns sandbox id \"969766e1a7c05a51f96d18d1f8cc9e663e035aeb18401fba928928db0df7df34\"" Jul 7 02:57:31.908774 containerd[1504]: time="2025-07-07T02:57:31.908375124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-3i0x6.gb1.brightbox.com,Uid:76a41ac5d44e1a2715c7a929baf69139,Namespace:kube-system,Attempt:0,} returns sandbox id \"05e4746e824c85427c45db3a2325b90b5988a557703663b67a507fd15e615bbf\"" Jul 7 02:57:31.909358 containerd[1504]: time="2025-07-07T02:57:31.909046317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-3i0x6.gb1.brightbox.com,Uid:a312cd016c1e5247bd33f248933535a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d5f3c2de8cdc932194654d006bef162f1eccd33eab7df5bcbfd2a668b25434f\"" Jul 7 02:57:31.909921 containerd[1504]: time="2025-07-07T02:57:31.909881901Z" level=info msg="CreateContainer within sandbox \"969766e1a7c05a51f96d18d1f8cc9e663e035aeb18401fba928928db0df7df34\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 02:57:31.914384 containerd[1504]: time="2025-07-07T02:57:31.914046667Z" level=info msg="CreateContainer within sandbox \"05e4746e824c85427c45db3a2325b90b5988a557703663b67a507fd15e615bbf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 02:57:31.915443 containerd[1504]: time="2025-07-07T02:57:31.915289709Z" level=info msg="CreateContainer within sandbox \"3d5f3c2de8cdc932194654d006bef162f1eccd33eab7df5bcbfd2a668b25434f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 02:57:31.937303 containerd[1504]: time="2025-07-07T02:57:31.937026059Z" level=info msg="CreateContainer within sandbox \"05e4746e824c85427c45db3a2325b90b5988a557703663b67a507fd15e615bbf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7db8083efb9cccf5701712ea18b81b530adf5cef723bf81399d8a9f5dd72fd1a\"" Jul 7 02:57:31.938326 containerd[1504]: time="2025-07-07T02:57:31.938293328Z" level=info msg="StartContainer for \"7db8083efb9cccf5701712ea18b81b530adf5cef723bf81399d8a9f5dd72fd1a\"" Jul 7 02:57:31.940296 containerd[1504]: time="2025-07-07T02:57:31.940191213Z" level=info msg="CreateContainer within sandbox \"969766e1a7c05a51f96d18d1f8cc9e663e035aeb18401fba928928db0df7df34\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af36e23f85ee8e1de44c01b366223fe75f11ad96a1b9cfe8f0e5369baddaca55\"" Jul 7 02:57:31.940688 containerd[1504]: time="2025-07-07T02:57:31.940454395Z" level=info msg="CreateContainer within sandbox \"3d5f3c2de8cdc932194654d006bef162f1eccd33eab7df5bcbfd2a668b25434f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d2a87550ab7c84108713e9237aeb78cd1640180c6f9e1cb449dca5a61b5aae2a\"" Jul 7 02:57:31.942188 containerd[1504]: time="2025-07-07T02:57:31.940936440Z" level=info msg="StartContainer for \"d2a87550ab7c84108713e9237aeb78cd1640180c6f9e1cb449dca5a61b5aae2a\"" Jul 7 02:57:31.944527 containerd[1504]: time="2025-07-07T02:57:31.944482707Z" level=info msg="StartContainer for \"af36e23f85ee8e1de44c01b366223fe75f11ad96a1b9cfe8f0e5369baddaca55\"" Jul 7 02:57:31.993041 systemd[1]: Started cri-containerd-d2a87550ab7c84108713e9237aeb78cd1640180c6f9e1cb449dca5a61b5aae2a.scope - libcontainer container 
d2a87550ab7c84108713e9237aeb78cd1640180c6f9e1cb449dca5a61b5aae2a. Jul 7 02:57:32.006652 systemd[1]: Started cri-containerd-7db8083efb9cccf5701712ea18b81b530adf5cef723bf81399d8a9f5dd72fd1a.scope - libcontainer container 7db8083efb9cccf5701712ea18b81b530adf5cef723bf81399d8a9f5dd72fd1a. Jul 7 02:57:32.017706 systemd[1]: Started cri-containerd-af36e23f85ee8e1de44c01b366223fe75f11ad96a1b9cfe8f0e5369baddaca55.scope - libcontainer container af36e23f85ee8e1de44c01b366223fe75f11ad96a1b9cfe8f0e5369baddaca55. Jul 7 02:57:32.106020 containerd[1504]: time="2025-07-07T02:57:32.105282681Z" level=info msg="StartContainer for \"af36e23f85ee8e1de44c01b366223fe75f11ad96a1b9cfe8f0e5369baddaca55\" returns successfully" Jul 7 02:57:32.133270 containerd[1504]: time="2025-07-07T02:57:32.132860104Z" level=info msg="StartContainer for \"d2a87550ab7c84108713e9237aeb78cd1640180c6f9e1cb449dca5a61b5aae2a\" returns successfully" Jul 7 02:57:32.139625 containerd[1504]: time="2025-07-07T02:57:32.139059127Z" level=info msg="StartContainer for \"7db8083efb9cccf5701712ea18b81b530adf5cef723bf81399d8a9f5dd72fd1a\" returns successfully" Jul 7 02:57:32.944262 kubelet[2295]: I0707 02:57:32.942768 2295 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:35.196086 kubelet[2295]: E0707 02:57:35.195992 2295 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-3i0x6.gb1.brightbox.com\" not found" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:35.312400 kubelet[2295]: I0707 02:57:35.311243 2295 kubelet_node_status.go:75] "Successfully registered node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:35.312400 kubelet[2295]: E0707 02:57:35.311310 2295 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"srv-3i0x6.gb1.brightbox.com\": node \"srv-3i0x6.gb1.brightbox.com\" not found" Jul 7 02:57:35.698617 kubelet[2295]: I0707 02:57:35.698480 2295 apiserver.go:52] "Watching apiserver" Jul 7 02:57:35.726367 kubelet[2295]: I0707 02:57:35.726275 2295 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 02:57:37.604048 systemd[1]: Reloading requested from client PID 2566 ('systemctl') (unit session-11.scope)... Jul 7 02:57:37.604092 systemd[1]: Reloading... Jul 7 02:57:37.720287 zram_generator::config[2605]: No configuration found. Jul 7 02:57:37.914403 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 02:57:38.047964 systemd[1]: Reloading finished in 443 ms. Jul 7 02:57:38.113380 kubelet[2295]: I0707 02:57:38.113195 2295 dynamic_cafile_content.go:174] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:57:38.113319 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:38.130035 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 02:57:38.130504 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 02:57:38.130618 systemd[1]: kubelet.service: Consumed 1.121s CPU time, 126.8M memory peak, 0B memory swap peak. Jul 7 02:57:38.138741 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 02:57:38.400487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 7 02:57:38.413984 (kubelet)[2669]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 02:57:38.489212 kubelet[2669]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:57:38.489785 kubelet[2669]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 7 02:57:38.489884 kubelet[2669]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 02:57:38.490160 kubelet[2669]: I0707 02:57:38.490087 2669 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 02:57:38.502845 kubelet[2669]: I0707 02:57:38.502800 2669 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 7 02:57:38.503065 kubelet[2669]: I0707 02:57:38.503044 2669 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 02:57:38.503840 kubelet[2669]: I0707 02:57:38.503748 2669 server.go:934] "Client rotation is on, will bootstrap in background" Jul 7 02:57:38.507100 kubelet[2669]: I0707 02:57:38.507045 2669 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 7 02:57:38.513282 kubelet[2669]: I0707 02:57:38.512353 2669 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 02:57:38.526266 kubelet[2669]: E0707 02:57:38.525383 2669 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 7 02:57:38.526266 kubelet[2669]: I0707 02:57:38.525439 2669 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 7 02:57:38.531328 kubelet[2669]: I0707 02:57:38.531295 2669 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 02:57:38.531925 kubelet[2669]: I0707 02:57:38.531896 2669 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 7 02:57:38.532170 kubelet[2669]: I0707 02:57:38.532119 2669 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 02:57:38.532518 kubelet[2669]: I0707 02:57:38.532173 2669 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-3i0x6.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 02:57:38.532708 kubelet[2669]: I0707 02:57:38.532558 2669 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 02:57:38.532708 kubelet[2669]: I0707 02:57:38.532579 2669 container_manager_linux.go:300] "Creating device plugin manager" Jul 7 02:57:38.532708 kubelet[2669]: I0707 02:57:38.532625 2669 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:57:38.535877 kubelet[2669]: I0707 02:57:38.535852 2669 kubelet.go:408] "Attempting to sync node with API server" Jul 7 02:57:38.535974 kubelet[2669]: I0707 02:57:38.535888 2669 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 02:57:38.535974 kubelet[2669]: I0707 02:57:38.535942 2669 kubelet.go:314] "Adding apiserver pod source" Jul 7 02:57:38.535974 kubelet[2669]: I0707 02:57:38.535960 2669 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 02:57:38.539173 kubelet[2669]: I0707 02:57:38.539140 2669 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 7 02:57:38.541642 kubelet[2669]: I0707 02:57:38.539813 2669 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 02:57:38.543415 kubelet[2669]: I0707 02:57:38.543391 2669 server.go:1274] "Started kubelet" Jul 7 02:57:38.548057 kubelet[2669]: I0707 02:57:38.547841 2669 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 02:57:38.563261 kubelet[2669]: I0707 
02:57:38.561169 2669 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 02:57:38.563261 kubelet[2669]: I0707 02:57:38.562607 2669 server.go:449] "Adding debug handlers to kubelet server" Jul 7 02:57:38.566150 kubelet[2669]: I0707 02:57:38.566111 2669 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 02:57:38.568264 kubelet[2669]: I0707 02:57:38.566559 2669 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 02:57:38.568873 kubelet[2669]: I0707 02:57:38.568847 2669 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 02:57:38.572480 kubelet[2669]: I0707 02:57:38.572423 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 02:57:38.575242 kubelet[2669]: I0707 02:57:38.574020 2669 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 02:57:38.575242 kubelet[2669]: I0707 02:57:38.574074 2669 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 7 02:57:38.575242 kubelet[2669]: I0707 02:57:38.574115 2669 kubelet.go:2321] "Starting kubelet main sync loop" Jul 7 02:57:38.575242 kubelet[2669]: E0707 02:57:38.574178 2669 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 02:57:38.576019 kubelet[2669]: I0707 02:57:38.575634 2669 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 7 02:57:38.579001 kubelet[2669]: I0707 02:57:38.577928 2669 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 7 02:57:38.580489 kubelet[2669]: I0707 02:57:38.579335 2669 reconciler.go:26] "Reconciler: start to sync state" Jul 7 02:57:38.589880 kubelet[2669]: I0707 02:57:38.589837 2669 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 02:57:38.599484 kubelet[2669]: I0707 02:57:38.599436 2669 factory.go:221] Registration of the containerd container factory successfully Jul 7 02:57:38.599484 kubelet[2669]: I0707 02:57:38.599466 2669 factory.go:221] Registration of the systemd container factory successfully Jul 7 02:57:38.624742 kubelet[2669]: E0707 02:57:38.624693 2669 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 02:57:38.674899 kubelet[2669]: E0707 02:57:38.674727 2669 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684188 2669 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684216 2669 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684265 2669 state_mem.go:36] "Initialized new in-memory state store" Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684654 2669 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684675 2669 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 02:57:38.684982 kubelet[2669]: I0707 02:57:38.684716 2669 policy_none.go:49] "None policy: Start" Jul 7 02:57:38.685670 kubelet[2669]: I0707 02:57:38.685633 2669 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 7 02:57:38.686374 kubelet[2669]: I0707 02:57:38.685683 2669 state_mem.go:35] "Initializing new in-memory state store" Jul 7 02:57:38.686374 kubelet[2669]: I0707 02:57:38.685894 2669 state_mem.go:75] "Updated machine memory state" Jul 7 02:57:38.702871 kubelet[2669]: I0707 02:57:38.701479 2669 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 02:57:38.704340 kubelet[2669]: I0707 02:57:38.703469 2669 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 02:57:38.704340 kubelet[2669]: I0707 02:57:38.703506 2669 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 02:57:38.717961 kubelet[2669]: I0707 02:57:38.716816 2669 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 02:57:38.825643 kubelet[2669]: I0707 02:57:38.825552 2669 kubelet_node_status.go:72] "Attempting to register node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.838981 kubelet[2669]: I0707 02:57:38.838363 2669 kubelet_node_status.go:111] "Node was previously registered" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.838981 kubelet[2669]: I0707 02:57:38.838497 2669 kubelet_node_status.go:75] "Successfully registered node" node="srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.882601 kubelet[2669]: I0707 02:57:38.882091 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-ca-certs\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.882601 kubelet[2669]: I0707 02:57:38.882147 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-flexvolume-dir\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.882601 kubelet[2669]: I0707 02:57:38.882183 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.882601 kubelet[2669]: I0707 02:57:38.882400 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-k8s-certs\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.882601 kubelet[2669]: I0707 02:57:38.882451 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-usr-share-ca-certificates\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.883901 kubelet[2669]: I0707 02:57:38.882482 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-k8s-certs\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.883901 kubelet[2669]: I0707 02:57:38.882513 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/76a41ac5d44e1a2715c7a929baf69139-kubeconfig\") pod \"kube-controller-manager-srv-3i0x6.gb1.brightbox.com\" (UID: \"76a41ac5d44e1a2715c7a929baf69139\") " pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.883901 kubelet[2669]: I0707 02:57:38.882541 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a312cd016c1e5247bd33f248933535a0-kubeconfig\") pod \"kube-scheduler-srv-3i0x6.gb1.brightbox.com\" (UID: \"a312cd016c1e5247bd33f248933535a0\") " pod="kube-system/kube-scheduler-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.883901 kubelet[2669]: I0707 02:57:38.882568 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b4b3514224b58f43bdc0a663e184339a-ca-certs\") pod \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" (UID: \"b4b3514224b58f43bdc0a663e184339a\") " pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:38.890492 kubelet[2669]: W0707 02:57:38.890449 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:57:38.892559 kubelet[2669]: W0707 02:57:38.891830 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:57:38.893407 kubelet[2669]: W0707 02:57:38.893326 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:57:39.537698 kubelet[2669]: I0707 
02:57:39.537367 2669 apiserver.go:52] "Watching apiserver" Jul 7 02:57:39.580204 kubelet[2669]: I0707 02:57:39.580091 2669 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 7 02:57:39.675897 kubelet[2669]: W0707 02:57:39.672126 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:57:39.675897 kubelet[2669]: E0707 02:57:39.672316 2669 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-srv-3i0x6.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:39.675897 kubelet[2669]: W0707 02:57:39.673482 2669 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 7 02:57:39.675897 kubelet[2669]: E0707 02:57:39.673529 2669 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-srv-3i0x6.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-3i0x6.gb1.brightbox.com" Jul 7 02:57:39.724503 kubelet[2669]: I0707 02:57:39.723209 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-3i0x6.gb1.brightbox.com" podStartSLOduration=1.723169352 podStartE2EDuration="1.723169352s" podCreationTimestamp="2025-07-07 02:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:57:39.70726411 +0000 UTC m=+1.285798753" watchObservedRunningTime="2025-07-07 02:57:39.723169352 +0000 UTC m=+1.301703989" Jul 7 02:57:39.725057 kubelet[2669]: I0707 02:57:39.724890 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-3i0x6.gb1.brightbox.com" podStartSLOduration=1.7248562760000001 podStartE2EDuration="1.724856276s" podCreationTimestamp="2025-07-07 02:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:57:39.72456908 +0000 UTC m=+1.303103713" watchObservedRunningTime="2025-07-07 02:57:39.724856276 +0000 UTC m=+1.303390914" Jul 7 02:57:44.006295 kubelet[2669]: I0707 02:57:44.006098 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-3i0x6.gb1.brightbox.com" podStartSLOduration=6.006070431 podStartE2EDuration="6.006070431s" podCreationTimestamp="2025-07-07 02:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:57:39.741142899 +0000 UTC m=+1.319677538" watchObservedRunningTime="2025-07-07 02:57:44.006070431 +0000 UTC m=+5.584605065" Jul 7 02:57:44.023454 kubelet[2669]: I0707 02:57:44.023113 2669 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 02:57:44.027823 containerd[1504]: time="2025-07-07T02:57:44.027726566Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
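The "Updating runtime config through cri with podcidr" entry above is the kubelet handing the node's pod CIDR to the container runtime over the CRI RuntimeService, which is what lets containerd render its CNI config once one is dropped in. A minimal, illustrative Go sketch of that call follows, assuming the standard k8s.io/cri-api v1 client and containerd's default socket path; apart from the CIDR value, nothing in it is taken from this log.

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Dial containerd's CRI socket (default path; adjust for the host in question).
	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// The same RPC the kubelet's kuberuntime_manager issues when it logs
	// "Updating runtime config through cri with podcidr".
	rs := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rs.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	log.Println("pod CIDR pushed to runtime")
}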
Jul 7 02:57:44.029059 kubelet[2669]: I0707 02:57:44.028111 2669 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 02:57:45.121796 kubelet[2669]: I0707 02:57:45.121646 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e54df55a-cb72-452d-a512-747485ff8719-kube-proxy\") pod \"kube-proxy-552jc\" (UID: \"e54df55a-cb72-452d-a512-747485ff8719\") " pod="kube-system/kube-proxy-552jc" Jul 7 02:57:45.121796 kubelet[2669]: I0707 02:57:45.121710 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e54df55a-cb72-452d-a512-747485ff8719-xtables-lock\") pod \"kube-proxy-552jc\" (UID: \"e54df55a-cb72-452d-a512-747485ff8719\") " pod="kube-system/kube-proxy-552jc" Jul 7 02:57:45.121796 kubelet[2669]: I0707 02:57:45.121763 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e54df55a-cb72-452d-a512-747485ff8719-lib-modules\") pod \"kube-proxy-552jc\" (UID: \"e54df55a-cb72-452d-a512-747485ff8719\") " pod="kube-system/kube-proxy-552jc" Jul 7 02:57:45.121796 kubelet[2669]: I0707 02:57:45.121797 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7k8m\" (UniqueName: \"kubernetes.io/projected/e54df55a-cb72-452d-a512-747485ff8719-kube-api-access-r7k8m\") pod \"kube-proxy-552jc\" (UID: \"e54df55a-cb72-452d-a512-747485ff8719\") " pod="kube-system/kube-proxy-552jc" Jul 7 02:57:45.136056 systemd[1]: Created slice kubepods-besteffort-pode54df55a_cb72_452d_a512_747485ff8719.slice - libcontainer container kubepods-besteffort-pode54df55a_cb72_452d_a512_747485ff8719.slice. Jul 7 02:57:45.305903 systemd[1]: Created slice kubepods-besteffort-pode8cc48bd_35aa_4e1c_bce5_a9894e879ed9.slice - libcontainer container kubepods-besteffort-pode8cc48bd_35aa_4e1c_bce5_a9894e879ed9.slice. Jul 7 02:57:45.325004 kubelet[2669]: I0707 02:57:45.322224 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8cc48bd-35aa-4e1c-bce5-a9894e879ed9-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-dv79h\" (UID: \"e8cc48bd-35aa-4e1c-bce5-a9894e879ed9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-dv79h" Jul 7 02:57:45.325004 kubelet[2669]: I0707 02:57:45.322314 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxqrr\" (UniqueName: \"kubernetes.io/projected/e8cc48bd-35aa-4e1c-bce5-a9894e879ed9-kube-api-access-qxqrr\") pod \"tigera-operator-5bf8dfcb4-dv79h\" (UID: \"e8cc48bd-35aa-4e1c-bce5-a9894e879ed9\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-dv79h" Jul 7 02:57:45.451542 containerd[1504]: time="2025-07-07T02:57:45.451431429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-552jc,Uid:e54df55a-cb72-452d-a512-747485ff8719,Namespace:kube-system,Attempt:0,}" Jul 7 02:57:45.499999 containerd[1504]: time="2025-07-07T02:57:45.498721273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:57:45.499999 containerd[1504]: time="2025-07-07T02:57:45.498836624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:57:45.499999 containerd[1504]: time="2025-07-07T02:57:45.498861232Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:45.499999 containerd[1504]: time="2025-07-07T02:57:45.499081670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:45.546554 systemd[1]: Started cri-containerd-89e186c256c1ddd0c7815a0c43d1201094086b1df56d276f8e4f0422d206d02d.scope - libcontainer container 89e186c256c1ddd0c7815a0c43d1201094086b1df56d276f8e4f0422d206d02d. Jul 7 02:57:45.583338 containerd[1504]: time="2025-07-07T02:57:45.583079931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-552jc,Uid:e54df55a-cb72-452d-a512-747485ff8719,Namespace:kube-system,Attempt:0,} returns sandbox id \"89e186c256c1ddd0c7815a0c43d1201094086b1df56d276f8e4f0422d206d02d\"" Jul 7 02:57:45.589185 containerd[1504]: time="2025-07-07T02:57:45.588596753Z" level=info msg="CreateContainer within sandbox \"89e186c256c1ddd0c7815a0c43d1201094086b1df56d276f8e4f0422d206d02d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 7 02:57:45.611953 containerd[1504]: time="2025-07-07T02:57:45.611872681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-dv79h,Uid:e8cc48bd-35aa-4e1c-bce5-a9894e879ed9,Namespace:tigera-operator,Attempt:0,}" Jul 7 02:57:45.615641 containerd[1504]: time="2025-07-07T02:57:45.615496752Z" level=info msg="CreateContainer within sandbox \"89e186c256c1ddd0c7815a0c43d1201094086b1df56d276f8e4f0422d206d02d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d7610e649a6192ef116ea47f3796e7cea1465e584955eff6b21a78e175c33b79\"" Jul 7 02:57:45.619191 containerd[1504]: time="2025-07-07T02:57:45.619123811Z" level=info msg="StartContainer for \"d7610e649a6192ef116ea47f3796e7cea1465e584955eff6b21a78e175c33b79\"" Jul 7 02:57:45.667523 systemd[1]: Started cri-containerd-d7610e649a6192ef116ea47f3796e7cea1465e584955eff6b21a78e175c33b79.scope - libcontainer container d7610e649a6192ef116ea47f3796e7cea1465e584955eff6b21a78e175c33b79. Jul 7 02:57:45.694324 containerd[1504]: time="2025-07-07T02:57:45.692364383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:57:45.694324 containerd[1504]: time="2025-07-07T02:57:45.693782411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:57:45.694324 containerd[1504]: time="2025-07-07T02:57:45.693805644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:45.695273 containerd[1504]: time="2025-07-07T02:57:45.694063111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:57:45.727525 systemd[1]: Started cri-containerd-619ce47e947ed972038ebb30cacb71e691cae48b1ba41a729eed771c026edc04.scope - libcontainer container 619ce47e947ed972038ebb30cacb71e691cae48b1ba41a729eed771c026edc04. 
Jul 7 02:57:45.762966 containerd[1504]: time="2025-07-07T02:57:45.762889712Z" level=info msg="StartContainer for \"d7610e649a6192ef116ea47f3796e7cea1465e584955eff6b21a78e175c33b79\" returns successfully" Jul 7 02:57:45.821254 containerd[1504]: time="2025-07-07T02:57:45.821082293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-dv79h,Uid:e8cc48bd-35aa-4e1c-bce5-a9894e879ed9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"619ce47e947ed972038ebb30cacb71e691cae48b1ba41a729eed771c026edc04\"" Jul 7 02:57:45.824303 containerd[1504]: time="2025-07-07T02:57:45.823686272Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Jul 7 02:57:46.714630 kubelet[2669]: I0707 02:57:46.714472 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-552jc" podStartSLOduration=1.713857418 podStartE2EDuration="1.713857418s" podCreationTimestamp="2025-07-07 02:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:57:46.712289896 +0000 UTC m=+8.290824543" watchObservedRunningTime="2025-07-07 02:57:46.713857418 +0000 UTC m=+8.292392051" Jul 7 02:57:48.715017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1651459881.mount: Deactivated successfully. Jul 7 02:57:50.080286 containerd[1504]: time="2025-07-07T02:57:50.079975186Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:50.082407 containerd[1504]: time="2025-07-07T02:57:50.081873210Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Jul 7 02:57:50.082407 containerd[1504]: time="2025-07-07T02:57:50.082081292Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:50.087281 containerd[1504]: time="2025-07-07T02:57:50.086770776Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:57:50.095826 containerd[1504]: time="2025-07-07T02:57:50.095432717Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 4.271587635s" Jul 7 02:57:50.095826 containerd[1504]: time="2025-07-07T02:57:50.095483384Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Jul 7 02:57:50.100264 containerd[1504]: time="2025-07-07T02:57:50.100076725Z" level=info msg="CreateContainer within sandbox \"619ce47e947ed972038ebb30cacb71e691cae48b1ba41a729eed771c026edc04\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jul 7 02:57:50.120252 containerd[1504]: time="2025-07-07T02:57:50.120182180Z" level=info msg="CreateContainer within sandbox \"619ce47e947ed972038ebb30cacb71e691cae48b1ba41a729eed771c026edc04\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b\"" Jul 7 02:57:50.121407 containerd[1504]: time="2025-07-07T02:57:50.121319721Z" level=info msg="StartContainer for \"368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b\"" Jul 7 02:57:50.164870 systemd[1]: run-containerd-runc-k8s.io-368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b-runc.wDRan3.mount: Deactivated successfully. Jul 7 02:57:50.177569 systemd[1]: Started cri-containerd-368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b.scope - libcontainer container 368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b. Jul 7 02:57:50.223028 containerd[1504]: time="2025-07-07T02:57:50.222973303Z" level=info msg="StartContainer for \"368ced863546ea7f13c0f72c4b41e7b686ddb396f95d89112da0a5adc1afe78b\" returns successfully" Jul 7 02:57:50.736468 kubelet[2669]: I0707 02:57:50.736014 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-dv79h" podStartSLOduration=1.461146639 podStartE2EDuration="5.73599218s" podCreationTimestamp="2025-07-07 02:57:45 +0000 UTC" firstStartedPulling="2025-07-07 02:57:45.822840221 +0000 UTC m=+7.401374842" lastFinishedPulling="2025-07-07 02:57:50.097685758 +0000 UTC m=+11.676220383" observedRunningTime="2025-07-07 02:57:50.735044949 +0000 UTC m=+12.313579593" watchObservedRunningTime="2025-07-07 02:57:50.73599218 +0000 UTC m=+12.314526810" Jul 7 02:57:55.780090 sudo[1771]: pam_unix(sudo:session): session closed for user root Jul 7 02:57:55.926876 sshd[1768]: pam_unix(sshd:session): session closed for user core Jul 7 02:57:55.933759 systemd-logind[1484]: Session 11 logged out. Waiting for processes to exit. Jul 7 02:57:55.936098 systemd[1]: sshd@8-10.244.11.130:22-139.178.68.195:42090.service: Deactivated successfully. Jul 7 02:57:55.946188 systemd[1]: session-11.scope: Deactivated successfully. Jul 7 02:57:55.947751 systemd[1]: session-11.scope: Consumed 7.060s CPU time, 139.6M memory peak, 0B memory swap peak. Jul 7 02:57:55.951853 systemd-logind[1484]: Removed session 11. Jul 7 02:58:01.390963 systemd[1]: Created slice kubepods-besteffort-podeaf78287_646d_4cd3_a847_e333cc9a4cf1.slice - libcontainer container kubepods-besteffort-podeaf78287_646d_4cd3_a847_e333cc9a4cf1.slice. 
Jul 7 02:58:01.438548 kubelet[2669]: I0707 02:58:01.438474 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eaf78287-646d-4cd3-a847-e333cc9a4cf1-tigera-ca-bundle\") pod \"calico-typha-5b65c74f5-zf66b\" (UID: \"eaf78287-646d-4cd3-a847-e333cc9a4cf1\") " pod="calico-system/calico-typha-5b65c74f5-zf66b" Jul 7 02:58:01.438548 kubelet[2669]: I0707 02:58:01.438561 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eaf78287-646d-4cd3-a847-e333cc9a4cf1-typha-certs\") pod \"calico-typha-5b65c74f5-zf66b\" (UID: \"eaf78287-646d-4cd3-a847-e333cc9a4cf1\") " pod="calico-system/calico-typha-5b65c74f5-zf66b" Jul 7 02:58:01.438548 kubelet[2669]: I0707 02:58:01.438619 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv92l\" (UniqueName: \"kubernetes.io/projected/eaf78287-646d-4cd3-a847-e333cc9a4cf1-kube-api-access-tv92l\") pod \"calico-typha-5b65c74f5-zf66b\" (UID: \"eaf78287-646d-4cd3-a847-e333cc9a4cf1\") " pod="calico-system/calico-typha-5b65c74f5-zf66b" Jul 7 02:58:01.672631 systemd[1]: Created slice kubepods-besteffort-podeceda17f_ff96_4bae_9210_2e384d563b57.slice - libcontainer container kubepods-besteffort-podeceda17f_ff96_4bae_9210_2e384d563b57.slice. Jul 7 02:58:01.700534 containerd[1504]: time="2025-07-07T02:58:01.700392643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b65c74f5-zf66b,Uid:eaf78287-646d-4cd3-a847-e333cc9a4cf1,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:01.742407 kubelet[2669]: I0707 02:58:01.740669 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-lib-modules\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.742407 kubelet[2669]: I0707 02:58:01.741767 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eceda17f-ff96-4bae-9210-2e384d563b57-tigera-ca-bundle\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.742407 kubelet[2669]: I0707 02:58:01.741808 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-cni-log-dir\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.742407 kubelet[2669]: I0707 02:58:01.741859 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/eceda17f-ff96-4bae-9210-2e384d563b57-node-certs\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.742407 kubelet[2669]: I0707 02:58:01.741891 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-cni-net-dir\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " 
pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.744037 kubelet[2669]: I0707 02:58:01.741938 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-xtables-lock\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.744037 kubelet[2669]: I0707 02:58:01.742021 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-var-lib-calico\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.744037 kubelet[2669]: I0707 02:58:01.742051 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-var-run-calico\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.744037 kubelet[2669]: I0707 02:58:01.742101 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-cni-bin-dir\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.744037 kubelet[2669]: I0707 02:58:01.742143 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-flexvol-driver-host\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.745644 kubelet[2669]: I0707 02:58:01.742198 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/eceda17f-ff96-4bae-9210-2e384d563b57-policysync\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.745644 kubelet[2669]: I0707 02:58:01.742271 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sw5mg\" (UniqueName: \"kubernetes.io/projected/eceda17f-ff96-4bae-9210-2e384d563b57-kube-api-access-sw5mg\") pod \"calico-node-v9ckq\" (UID: \"eceda17f-ff96-4bae-9210-2e384d563b57\") " pod="calico-system/calico-node-v9ckq" Jul 7 02:58:01.776038 containerd[1504]: time="2025-07-07T02:58:01.775640979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:01.776038 containerd[1504]: time="2025-07-07T02:58:01.775771253Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:01.776812 containerd[1504]: time="2025-07-07T02:58:01.775802489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:01.776812 containerd[1504]: time="2025-07-07T02:58:01.775961908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:01.863110 kubelet[2669]: E0707 02:58:01.863054 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.863563 kubelet[2669]: W0707 02:58:01.863430 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.864324 kubelet[2669]: E0707 02:58:01.864127 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.866495 systemd[1]: Started cri-containerd-6a7bf1e0fce80f45b400790ba14cc896d4aee5868fe6bf7149898b919de54246.scope - libcontainer container 6a7bf1e0fce80f45b400790ba14cc896d4aee5868fe6bf7149898b919de54246. Jul 7 02:58:01.872310 kubelet[2669]: E0707 02:58:01.871787 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.872310 kubelet[2669]: W0707 02:58:01.871810 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.872310 kubelet[2669]: E0707 02:58:01.871842 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.876107 kubelet[2669]: E0707 02:58:01.875716 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.876107 kubelet[2669]: W0707 02:58:01.875739 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.876107 kubelet[2669]: E0707 02:58:01.875801 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.877651 kubelet[2669]: E0707 02:58:01.876266 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.877651 kubelet[2669]: W0707 02:58:01.876282 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.877651 kubelet[2669]: E0707 02:58:01.876349 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.880032 kubelet[2669]: E0707 02:58:01.879135 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.880032 kubelet[2669]: W0707 02:58:01.879156 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.880032 kubelet[2669]: E0707 02:58:01.879278 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.881059 kubelet[2669]: E0707 02:58:01.880654 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.881059 kubelet[2669]: W0707 02:58:01.880674 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.881059 kubelet[2669]: E0707 02:58:01.880716 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.881770 kubelet[2669]: E0707 02:58:01.881750 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.882016 kubelet[2669]: W0707 02:58:01.881867 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.882016 kubelet[2669]: E0707 02:58:01.882013 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.883194 kubelet[2669]: E0707 02:58:01.883036 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.883194 kubelet[2669]: W0707 02:58:01.883057 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.884295 kubelet[2669]: E0707 02:58:01.884256 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.886174 kubelet[2669]: E0707 02:58:01.885939 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.886276 kubelet[2669]: W0707 02:58:01.886171 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.886276 kubelet[2669]: E0707 02:58:01.886195 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.887476 kubelet[2669]: E0707 02:58:01.887378 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.887476 kubelet[2669]: W0707 02:58:01.887402 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.887730 kubelet[2669]: E0707 02:58:01.887700 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.892583 kubelet[2669]: E0707 02:58:01.891546 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.892583 kubelet[2669]: W0707 02:58:01.891573 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.892583 kubelet[2669]: E0707 02:58:01.891597 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.893543 kubelet[2669]: E0707 02:58:01.893202 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.893543 kubelet[2669]: W0707 02:58:01.893226 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.893543 kubelet[2669]: E0707 02:58:01.893263 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.895481 kubelet[2669]: E0707 02:58:01.894647 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.895481 kubelet[2669]: W0707 02:58:01.894670 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.895481 kubelet[2669]: E0707 02:58:01.894691 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.901105 kubelet[2669]: E0707 02:58:01.901071 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.901105 kubelet[2669]: W0707 02:58:01.901098 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.901368 kubelet[2669]: E0707 02:58:01.901125 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.938221 kubelet[2669]: E0707 02:58:01.938049 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:01.943195 kubelet[2669]: E0707 02:58:01.942979 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.943195 kubelet[2669]: W0707 02:58:01.943008 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.943195 kubelet[2669]: E0707 02:58:01.943035 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.944176 kubelet[2669]: E0707 02:58:01.944014 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.944412 kubelet[2669]: W0707 02:58:01.944383 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.944508 kubelet[2669]: E0707 02:58:01.944413 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.945800 kubelet[2669]: E0707 02:58:01.945748 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.945909 kubelet[2669]: W0707 02:58:01.945772 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.945973 kubelet[2669]: E0707 02:58:01.945915 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.946861 kubelet[2669]: E0707 02:58:01.946837 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.946952 kubelet[2669]: W0707 02:58:01.946860 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.947014 kubelet[2669]: E0707 02:58:01.946999 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.948180 kubelet[2669]: E0707 02:58:01.948080 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.948180 kubelet[2669]: W0707 02:58:01.948103 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.948796 kubelet[2669]: E0707 02:58:01.948718 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.949140 kubelet[2669]: I0707 02:58:01.949004 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/524e68d3-c271-42bd-a0b6-ec9248f8255b-kubelet-dir\") pod \"csi-node-driver-dq6fd\" (UID: \"524e68d3-c271-42bd-a0b6-ec9248f8255b\") " pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:01.949366 kubelet[2669]: E0707 02:58:01.949315 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.949366 kubelet[2669]: W0707 02:58:01.949363 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.949612 kubelet[2669]: E0707 02:58:01.949584 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.951325 kubelet[2669]: E0707 02:58:01.951298 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.951325 kubelet[2669]: W0707 02:58:01.951321 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.951473 kubelet[2669]: E0707 02:58:01.951363 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.953813 kubelet[2669]: E0707 02:58:01.953784 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.953895 kubelet[2669]: W0707 02:58:01.953817 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.953895 kubelet[2669]: E0707 02:58:01.953836 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.955014 kubelet[2669]: E0707 02:58:01.954988 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.955736 kubelet[2669]: W0707 02:58:01.955127 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.955736 kubelet[2669]: E0707 02:58:01.955163 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.956638 kubelet[2669]: E0707 02:58:01.956610 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.956716 kubelet[2669]: W0707 02:58:01.956650 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.956788 kubelet[2669]: E0707 02:58:01.956735 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.957346 kubelet[2669]: E0707 02:58:01.957310 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.957346 kubelet[2669]: W0707 02:58:01.957332 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.958324 kubelet[2669]: E0707 02:58:01.958294 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.958698 kubelet[2669]: E0707 02:58:01.958667 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.958775 kubelet[2669]: W0707 02:58:01.958697 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.958848 kubelet[2669]: E0707 02:58:01.958793 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.959349 kubelet[2669]: E0707 02:58:01.959307 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.959349 kubelet[2669]: W0707 02:58:01.959348 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.959477 kubelet[2669]: E0707 02:58:01.959367 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.960420 kubelet[2669]: E0707 02:58:01.960394 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.960420 kubelet[2669]: W0707 02:58:01.960417 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.960547 kubelet[2669]: E0707 02:58:01.960434 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.960809 kubelet[2669]: E0707 02:58:01.960783 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.960809 kubelet[2669]: W0707 02:58:01.960806 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.960910 kubelet[2669]: E0707 02:58:01.960823 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.963254 kubelet[2669]: E0707 02:58:01.961538 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.963254 kubelet[2669]: W0707 02:58:01.961560 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.963254 kubelet[2669]: E0707 02:58:01.961577 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.963254 kubelet[2669]: E0707 02:58:01.962426 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.963254 kubelet[2669]: W0707 02:58:01.962443 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.963254 kubelet[2669]: E0707 02:58:01.962459 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.963254 kubelet[2669]: E0707 02:58:01.963243 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.963584 kubelet[2669]: W0707 02:58:01.963272 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.963584 kubelet[2669]: E0707 02:58:01.963289 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.964071 kubelet[2669]: E0707 02:58:01.964046 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.964071 kubelet[2669]: W0707 02:58:01.964069 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.964185 kubelet[2669]: E0707 02:58:01.964086 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.965633 kubelet[2669]: E0707 02:58:01.965605 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.965633 kubelet[2669]: W0707 02:58:01.965629 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.965765 kubelet[2669]: E0707 02:58:01.965649 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.965954 kubelet[2669]: E0707 02:58:01.965932 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.966009 kubelet[2669]: W0707 02:58:01.965967 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.966009 kubelet[2669]: E0707 02:58:01.965986 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.966654 kubelet[2669]: E0707 02:58:01.966619 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.966654 kubelet[2669]: W0707 02:58:01.966642 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.966882 kubelet[2669]: E0707 02:58:01.966764 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:01.967670 kubelet[2669]: E0707 02:58:01.967642 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:01.967740 kubelet[2669]: W0707 02:58:01.967672 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:01.967740 kubelet[2669]: E0707 02:58:01.967690 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:01.984566 containerd[1504]: time="2025-07-07T02:58:01.984225462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9ckq,Uid:eceda17f-ff96-4bae-9210-2e384d563b57,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:02.034082 containerd[1504]: time="2025-07-07T02:58:02.034025100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5b65c74f5-zf66b,Uid:eaf78287-646d-4cd3-a847-e333cc9a4cf1,Namespace:calico-system,Attempt:0,} returns sandbox id \"6a7bf1e0fce80f45b400790ba14cc896d4aee5868fe6bf7149898b919de54246\"" Jul 7 02:58:02.043790 containerd[1504]: time="2025-07-07T02:58:02.043746894Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Jul 7 02:58:02.050612 kubelet[2669]: E0707 02:58:02.050574 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.050747 kubelet[2669]: W0707 02:58:02.050630 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.050747 kubelet[2669]: E0707 02:58:02.050660 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.051879 kubelet[2669]: E0707 02:58:02.051846 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.051879 kubelet[2669]: W0707 02:58:02.051870 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.052495 kubelet[2669]: E0707 02:58:02.052462 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.053376 kubelet[2669]: I0707 02:58:02.053332 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/524e68d3-c271-42bd-a0b6-ec9248f8255b-registration-dir\") pod \"csi-node-driver-dq6fd\" (UID: \"524e68d3-c271-42bd-a0b6-ec9248f8255b\") " pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:02.057319 kubelet[2669]: E0707 02:58:02.057290 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.057319 kubelet[2669]: W0707 02:58:02.057316 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.057487 kubelet[2669]: E0707 02:58:02.057366 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.057910 kubelet[2669]: E0707 02:58:02.057751 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.057910 kubelet[2669]: W0707 02:58:02.057792 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.057910 kubelet[2669]: E0707 02:58:02.057868 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.058621 kubelet[2669]: E0707 02:58:02.058584 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.058621 kubelet[2669]: W0707 02:58:02.058607 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.059507 kubelet[2669]: E0707 02:58:02.059321 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.059507 kubelet[2669]: I0707 02:58:02.059374 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/524e68d3-c271-42bd-a0b6-ec9248f8255b-socket-dir\") pod \"csi-node-driver-dq6fd\" (UID: \"524e68d3-c271-42bd-a0b6-ec9248f8255b\") " pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:02.062861 kubelet[2669]: E0707 02:58:02.061833 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.062861 kubelet[2669]: W0707 02:58:02.061859 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.062861 kubelet[2669]: E0707 02:58:02.062047 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.062861 kubelet[2669]: E0707 02:58:02.062726 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.062861 kubelet[2669]: W0707 02:58:02.062745 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.063196 kubelet[2669]: E0707 02:58:02.062862 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.064668 kubelet[2669]: E0707 02:58:02.064645 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.064960 kubelet[2669]: W0707 02:58:02.064784 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.065390 kubelet[2669]: E0707 02:58:02.065290 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.065390 kubelet[2669]: I0707 02:58:02.065356 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/524e68d3-c271-42bd-a0b6-ec9248f8255b-varrun\") pod \"csi-node-driver-dq6fd\" (UID: \"524e68d3-c271-42bd-a0b6-ec9248f8255b\") " pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:02.065725 kubelet[2669]: E0707 02:58:02.065591 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.065725 kubelet[2669]: W0707 02:58:02.065610 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.065725 kubelet[2669]: E0707 02:58:02.065651 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.066390 kubelet[2669]: E0707 02:58:02.066109 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.066390 kubelet[2669]: W0707 02:58:02.066128 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.066679 kubelet[2669]: E0707 02:58:02.066495 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.067010 kubelet[2669]: E0707 02:58:02.066941 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.067010 kubelet[2669]: W0707 02:58:02.066961 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.068000 kubelet[2669]: E0707 02:58:02.067951 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.068465 kubelet[2669]: E0707 02:58:02.068248 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.068465 kubelet[2669]: W0707 02:58:02.068268 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.068465 kubelet[2669]: E0707 02:58:02.068304 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.068465 kubelet[2669]: I0707 02:58:02.068330 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzp2k\" (UniqueName: \"kubernetes.io/projected/524e68d3-c271-42bd-a0b6-ec9248f8255b-kube-api-access-dzp2k\") pod \"csi-node-driver-dq6fd\" (UID: \"524e68d3-c271-42bd-a0b6-ec9248f8255b\") " pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:02.069174 kubelet[2669]: E0707 02:58:02.068909 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.069174 kubelet[2669]: W0707 02:58:02.068934 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.069174 kubelet[2669]: E0707 02:58:02.068978 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.070179 kubelet[2669]: E0707 02:58:02.070005 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.070179 kubelet[2669]: W0707 02:58:02.070025 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.070179 kubelet[2669]: E0707 02:58:02.070041 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.071618 kubelet[2669]: E0707 02:58:02.071579 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.071618 kubelet[2669]: W0707 02:58:02.071607 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.071875 kubelet[2669]: E0707 02:58:02.071633 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.074672 kubelet[2669]: E0707 02:58:02.072740 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.074672 kubelet[2669]: W0707 02:58:02.072761 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.074672 kubelet[2669]: E0707 02:58:02.072778 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.074672 kubelet[2669]: E0707 02:58:02.074571 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.074672 kubelet[2669]: W0707 02:58:02.074588 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.074672 kubelet[2669]: E0707 02:58:02.074605 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.110430 containerd[1504]: time="2025-07-07T02:58:02.107508439Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:02.110430 containerd[1504]: time="2025-07-07T02:58:02.107626684Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:02.110430 containerd[1504]: time="2025-07-07T02:58:02.107653743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:02.110430 containerd[1504]: time="2025-07-07T02:58:02.107811884Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:02.159507 systemd[1]: Started cri-containerd-588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647.scope - libcontainer container 588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647. Jul 7 02:58:02.176460 kubelet[2669]: E0707 02:58:02.176208 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.176460 kubelet[2669]: W0707 02:58:02.176256 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.176460 kubelet[2669]: E0707 02:58:02.176285 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.177446 kubelet[2669]: E0707 02:58:02.177285 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.177446 kubelet[2669]: W0707 02:58:02.177305 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.177446 kubelet[2669]: E0707 02:58:02.177332 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.180814 kubelet[2669]: E0707 02:58:02.180368 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.180814 kubelet[2669]: W0707 02:58:02.180392 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.182508 kubelet[2669]: E0707 02:58:02.182309 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.184186 kubelet[2669]: E0707 02:58:02.182598 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.184186 kubelet[2669]: W0707 02:58:02.182637 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.184186 kubelet[2669]: E0707 02:58:02.182681 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.184186 kubelet[2669]: E0707 02:58:02.183372 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.184186 kubelet[2669]: W0707 02:58:02.183388 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.184186 kubelet[2669]: E0707 02:58:02.184143 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.186102 kubelet[2669]: E0707 02:58:02.184733 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.186102 kubelet[2669]: W0707 02:58:02.184851 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.186102 kubelet[2669]: E0707 02:58:02.184881 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.188063 kubelet[2669]: E0707 02:58:02.186392 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.188063 kubelet[2669]: W0707 02:58:02.186518 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.188063 kubelet[2669]: E0707 02:58:02.186580 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.190995 kubelet[2669]: E0707 02:58:02.188389 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.190995 kubelet[2669]: W0707 02:58:02.188413 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.190995 kubelet[2669]: E0707 02:58:02.188471 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.190995 kubelet[2669]: E0707 02:58:02.188938 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.190995 kubelet[2669]: W0707 02:58:02.188954 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.190995 kubelet[2669]: E0707 02:58:02.190413 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.190995 kubelet[2669]: E0707 02:58:02.190598 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.190995 kubelet[2669]: W0707 02:58:02.190613 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.192104 kubelet[2669]: E0707 02:58:02.190989 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.192104 kubelet[2669]: E0707 02:58:02.191404 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.192104 kubelet[2669]: W0707 02:58:02.191420 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.192104 kubelet[2669]: E0707 02:58:02.191693 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.192344 kubelet[2669]: E0707 02:58:02.192145 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.192344 kubelet[2669]: W0707 02:58:02.192161 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.192344 kubelet[2669]: E0707 02:58:02.192264 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.193683 kubelet[2669]: E0707 02:58:02.193511 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.193683 kubelet[2669]: W0707 02:58:02.193553 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.193683 kubelet[2669]: E0707 02:58:02.193670 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.193973 kubelet[2669]: E0707 02:58:02.193864 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.193973 kubelet[2669]: W0707 02:58:02.193879 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.194118 kubelet[2669]: E0707 02:58:02.193998 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.195178 kubelet[2669]: E0707 02:58:02.194321 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.195178 kubelet[2669]: W0707 02:58:02.194355 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.195178 kubelet[2669]: E0707 02:58:02.194856 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.196027 kubelet[2669]: E0707 02:58:02.195362 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.196027 kubelet[2669]: W0707 02:58:02.195378 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.197422 kubelet[2669]: E0707 02:58:02.196215 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.197422 kubelet[2669]: E0707 02:58:02.196699 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.197422 kubelet[2669]: W0707 02:58:02.196716 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.197422 kubelet[2669]: E0707 02:58:02.197098 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.198269 kubelet[2669]: E0707 02:58:02.197760 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.198269 kubelet[2669]: W0707 02:58:02.197781 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.198269 kubelet[2669]: E0707 02:58:02.197878 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.198818 kubelet[2669]: E0707 02:58:02.198587 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.198818 kubelet[2669]: W0707 02:58:02.198608 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.198818 kubelet[2669]: E0707 02:58:02.198625 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.200481 kubelet[2669]: E0707 02:58:02.200447 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.200481 kubelet[2669]: W0707 02:58:02.200471 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.200597 kubelet[2669]: E0707 02:58:02.200489 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:02.223873 kubelet[2669]: E0707 02:58:02.223839 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:02.224089 kubelet[2669]: W0707 02:58:02.224062 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:02.224268 kubelet[2669]: E0707 02:58:02.224225 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:02.284655 containerd[1504]: time="2025-07-07T02:58:02.284466764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v9ckq,Uid:eceda17f-ff96-4bae-9210-2e384d563b57,Namespace:calico-system,Attempt:0,} returns sandbox id \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\"" Jul 7 02:58:03.574831 kubelet[2669]: E0707 02:58:03.574758 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:03.765772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413119267.mount: Deactivated successfully. Jul 7 02:58:05.574692 kubelet[2669]: E0707 02:58:05.574598 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:05.652482 containerd[1504]: time="2025-07-07T02:58:05.651713692Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:05.654383 containerd[1504]: time="2025-07-07T02:58:05.654286651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Jul 7 02:58:05.655342 containerd[1504]: time="2025-07-07T02:58:05.655220005Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:05.662315 containerd[1504]: time="2025-07-07T02:58:05.662110413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:05.667799 containerd[1504]: time="2025-07-07T02:58:05.666216968Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 3.622261535s" Jul 7 02:58:05.667799 containerd[1504]: time="2025-07-07T02:58:05.667607856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Jul 7 02:58:05.673523 containerd[1504]: time="2025-07-07T02:58:05.673464021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Jul 7 02:58:05.720365 containerd[1504]: time="2025-07-07T02:58:05.717274928Z" level=info msg="CreateContainer within sandbox \"6a7bf1e0fce80f45b400790ba14cc896d4aee5868fe6bf7149898b919de54246\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jul 7 02:58:05.741034 containerd[1504]: time="2025-07-07T02:58:05.740973367Z" level=info msg="CreateContainer within sandbox \"6a7bf1e0fce80f45b400790ba14cc896d4aee5868fe6bf7149898b919de54246\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns 
container id \"693b41fdf681323052212078535ace88943e75f7f7f4be16a8c926b43eeb9ef6\"" Jul 7 02:58:05.753947 containerd[1504]: time="2025-07-07T02:58:05.753856208Z" level=info msg="StartContainer for \"693b41fdf681323052212078535ace88943e75f7f7f4be16a8c926b43eeb9ef6\"" Jul 7 02:58:05.824551 systemd[1]: Started cri-containerd-693b41fdf681323052212078535ace88943e75f7f7f4be16a8c926b43eeb9ef6.scope - libcontainer container 693b41fdf681323052212078535ace88943e75f7f7f4be16a8c926b43eeb9ef6. Jul 7 02:58:05.910863 containerd[1504]: time="2025-07-07T02:58:05.910805177Z" level=info msg="StartContainer for \"693b41fdf681323052212078535ace88943e75f7f7f4be16a8c926b43eeb9ef6\" returns successfully" Jul 7 02:58:06.810420 kubelet[2669]: E0707 02:58:06.810372 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.810420 kubelet[2669]: W0707 02:58:06.810407 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.810992 kubelet[2669]: E0707 02:58:06.810436 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.811496 kubelet[2669]: E0707 02:58:06.811324 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.811496 kubelet[2669]: W0707 02:58:06.811347 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.811496 kubelet[2669]: E0707 02:58:06.811369 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.812525 kubelet[2669]: E0707 02:58:06.811950 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.812525 kubelet[2669]: W0707 02:58:06.811966 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.812525 kubelet[2669]: E0707 02:58:06.811982 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.813775 kubelet[2669]: E0707 02:58:06.813278 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.813775 kubelet[2669]: W0707 02:58:06.813307 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.813775 kubelet[2669]: E0707 02:58:06.813324 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.814649 kubelet[2669]: E0707 02:58:06.813813 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.814649 kubelet[2669]: W0707 02:58:06.813829 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.814649 kubelet[2669]: E0707 02:58:06.813844 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.814649 kubelet[2669]: E0707 02:58:06.814480 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.814649 kubelet[2669]: W0707 02:58:06.814496 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.814649 kubelet[2669]: E0707 02:58:06.814512 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.815196 kubelet[2669]: E0707 02:58:06.815171 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.815331 kubelet[2669]: W0707 02:58:06.815220 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.815331 kubelet[2669]: E0707 02:58:06.815269 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.816152 kubelet[2669]: E0707 02:58:06.816125 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.816152 kubelet[2669]: W0707 02:58:06.816148 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.816337 kubelet[2669]: E0707 02:58:06.816273 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.816989 kubelet[2669]: E0707 02:58:06.816962 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.816989 kubelet[2669]: W0707 02:58:06.816983 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.817114 kubelet[2669]: E0707 02:58:06.817013 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.817595 kubelet[2669]: E0707 02:58:06.817555 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.817687 kubelet[2669]: W0707 02:58:06.817625 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.817687 kubelet[2669]: E0707 02:58:06.817650 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.818063 kubelet[2669]: E0707 02:58:06.818031 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.818063 kubelet[2669]: W0707 02:58:06.818053 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.818182 kubelet[2669]: E0707 02:58:06.818070 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.818540 kubelet[2669]: E0707 02:58:06.818515 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.818540 kubelet[2669]: W0707 02:58:06.818537 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.818689 kubelet[2669]: E0707 02:58:06.818554 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.818847 kubelet[2669]: E0707 02:58:06.818823 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.818847 kubelet[2669]: W0707 02:58:06.818845 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.818958 kubelet[2669]: E0707 02:58:06.818861 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.819186 kubelet[2669]: E0707 02:58:06.819164 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.819186 kubelet[2669]: W0707 02:58:06.819183 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.819370 kubelet[2669]: E0707 02:58:06.819199 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.819586 kubelet[2669]: E0707 02:58:06.819564 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.819586 kubelet[2669]: W0707 02:58:06.819583 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.819689 kubelet[2669]: E0707 02:58:06.819600 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.829054 kubelet[2669]: E0707 02:58:06.829014 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.829054 kubelet[2669]: W0707 02:58:06.829043 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.829437 kubelet[2669]: E0707 02:58:06.829064 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.829711 kubelet[2669]: E0707 02:58:06.829498 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.829711 kubelet[2669]: W0707 02:58:06.829548 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.829711 kubelet[2669]: E0707 02:58:06.829567 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.829985 kubelet[2669]: E0707 02:58:06.829957 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.830144 kubelet[2669]: W0707 02:58:06.830107 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.830394 kubelet[2669]: E0707 02:58:06.830265 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.831719 kubelet[2669]: E0707 02:58:06.830705 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.831719 kubelet[2669]: W0707 02:58:06.830850 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.831719 kubelet[2669]: E0707 02:58:06.830872 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.832122 kubelet[2669]: E0707 02:58:06.832085 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.832122 kubelet[2669]: W0707 02:58:06.832112 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.832323 kubelet[2669]: E0707 02:58:06.832270 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.832515 kubelet[2669]: E0707 02:58:06.832482 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.832515 kubelet[2669]: W0707 02:58:06.832508 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.832766 kubelet[2669]: E0707 02:58:06.832668 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.832834 kubelet[2669]: E0707 02:58:06.832806 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.832834 kubelet[2669]: W0707 02:58:06.832821 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.832933 kubelet[2669]: E0707 02:58:06.832897 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.833279 kubelet[2669]: E0707 02:58:06.833228 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.833279 kubelet[2669]: W0707 02:58:06.833275 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.833758 kubelet[2669]: E0707 02:58:06.833722 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.834129 kubelet[2669]: E0707 02:58:06.834099 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.834129 kubelet[2669]: W0707 02:58:06.834121 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.834377 kubelet[2669]: E0707 02:58:06.834317 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.835108 kubelet[2669]: E0707 02:58:06.835065 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.835108 kubelet[2669]: W0707 02:58:06.835086 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.835318 kubelet[2669]: E0707 02:58:06.835211 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.835603 kubelet[2669]: E0707 02:58:06.835539 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.835603 kubelet[2669]: W0707 02:58:06.835591 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.835728 kubelet[2669]: E0707 02:58:06.835660 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.836729 kubelet[2669]: E0707 02:58:06.836694 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.836729 kubelet[2669]: W0707 02:58:06.836720 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.836868 kubelet[2669]: E0707 02:58:06.836846 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.837383 kubelet[2669]: E0707 02:58:06.837321 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.837383 kubelet[2669]: W0707 02:58:06.837375 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.837911 kubelet[2669]: E0707 02:58:06.837742 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.838501 kubelet[2669]: E0707 02:58:06.838318 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.838501 kubelet[2669]: W0707 02:58:06.838340 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.838501 kubelet[2669]: E0707 02:58:06.838413 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:06.838977 kubelet[2669]: E0707 02:58:06.838890 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.838977 kubelet[2669]: W0707 02:58:06.838910 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.838977 kubelet[2669]: E0707 02:58:06.838944 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.839595 kubelet[2669]: E0707 02:58:06.839449 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.839595 kubelet[2669]: W0707 02:58:06.839470 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.839595 kubelet[2669]: E0707 02:58:06.839512 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.840819 kubelet[2669]: E0707 02:58:06.840668 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.840819 kubelet[2669]: W0707 02:58:06.840688 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.840819 kubelet[2669]: E0707 02:58:06.840714 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 02:58:06.841452 kubelet[2669]: E0707 02:58:06.841088 2669 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 02:58:06.841452 kubelet[2669]: W0707 02:58:06.841103 2669 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 02:58:06.841452 kubelet[2669]: E0707 02:58:06.841119 2669 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 02:58:07.350008 containerd[1504]: time="2025-07-07T02:58:07.349769441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:07.353059 containerd[1504]: time="2025-07-07T02:58:07.352680597Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 02:58:07.353210 containerd[1504]: time="2025-07-07T02:58:07.353176660Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:07.357227 containerd[1504]: time="2025-07-07T02:58:07.357164850Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:07.358391 containerd[1504]: time="2025-07-07T02:58:07.358177769Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.68465391s" Jul 7 02:58:07.358391 containerd[1504]: time="2025-07-07T02:58:07.358226944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 02:58:07.362063 containerd[1504]: time="2025-07-07T02:58:07.362006066Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 02:58:07.387540 containerd[1504]: time="2025-07-07T02:58:07.387175789Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7\"" Jul 7 02:58:07.390277 containerd[1504]: time="2025-07-07T02:58:07.389807208Z" level=info msg="StartContainer for \"bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7\"" Jul 7 02:58:07.469497 systemd[1]: Started cri-containerd-bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7.scope - libcontainer container bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7. Jul 7 02:58:07.514109 containerd[1504]: time="2025-07-07T02:58:07.514056940Z" level=info msg="StartContainer for \"bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7\" returns successfully" Jul 7 02:58:07.528090 systemd[1]: cri-containerd-bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7.scope: Deactivated successfully. 
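[Editor's note, not part of the captured log.] The repeated kubelet errors above come from FlexVolume plugin probing: the kubelet exec's the driver binary it expects at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with "init" as the first argument and tries to unmarshal a JSON status object from stdout; because the binary is not yet present, stdout is empty and the unmarshal fails with "unexpected end of JSON input". The flexvol-driver init container started just above (image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2) is typically what installs that uds binary, after which the probe errors stop. Below is a minimal, hypothetical Go sketch of the FlexVolume call convention involved, not Calico's actual uds driver, shown only to illustrate the JSON reply the kubelet was waiting for.

    // flexvol_init_sketch.go - hypothetical minimal FlexVolume driver stub.
    // The kubelet runs the driver executable with the call name as argv[1]
    // ("init" in the log above) and parses a JSON status object from stdout.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors the fields of the FlexVolume status reply.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // Report success and advertise no attach/detach support, so the
            // kubelet only issues mount-style calls to this driver.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            // Unimplemented calls must still return well-formed JSON.
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

An empty reply (the failure mode logged above) is treated as malformed; once a binary answering "init" with JSON like {"status":"Success","capabilities":{"attach":false}} is in place under nodeagent~uds/, the dynamic plugin probe succeeds.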
Jul 7 02:58:07.576295 kubelet[2669]: E0707 02:58:07.576102 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:07.586668 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7-rootfs.mount: Deactivated successfully. Jul 7 02:58:07.762123 containerd[1504]: time="2025-07-07T02:58:07.756537467Z" level=info msg="shim disconnected" id=bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7 namespace=k8s.io Jul 7 02:58:07.762123 containerd[1504]: time="2025-07-07T02:58:07.761944885Z" level=warning msg="cleaning up after shim disconnected" id=bf5e02fcdf01f559109f2897186e9c504788c4de409afdbc498272bf5d3127c7 namespace=k8s.io Jul 7 02:58:07.762123 containerd[1504]: time="2025-07-07T02:58:07.761973109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:58:07.820841 kubelet[2669]: I0707 02:58:07.820589 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:58:07.824650 containerd[1504]: time="2025-07-07T02:58:07.824040591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 02:58:07.856340 kubelet[2669]: I0707 02:58:07.855462 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5b65c74f5-zf66b" podStartSLOduration=3.225026015 podStartE2EDuration="6.855422238s" podCreationTimestamp="2025-07-07 02:58:01 +0000 UTC" firstStartedPulling="2025-07-07 02:58:02.040974189 +0000 UTC m=+23.619508814" lastFinishedPulling="2025-07-07 02:58:05.671370403 +0000 UTC m=+27.249905037" observedRunningTime="2025-07-07 02:58:06.803727302 +0000 UTC m=+28.382261940" watchObservedRunningTime="2025-07-07 02:58:07.855422238 +0000 UTC m=+29.433956871" Jul 7 02:58:09.575297 kubelet[2669]: E0707 02:58:09.574958 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:11.578052 kubelet[2669]: E0707 02:58:11.577926 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:12.755782 containerd[1504]: time="2025-07-07T02:58:12.755699106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:12.758588 containerd[1504]: time="2025-07-07T02:58:12.758517241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 02:58:12.760022 containerd[1504]: time="2025-07-07T02:58:12.759950710Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:12.764128 containerd[1504]: time="2025-07-07T02:58:12.764041845Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:12.765742 containerd[1504]: time="2025-07-07T02:58:12.765545487Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.941443583s" Jul 7 02:58:12.765742 containerd[1504]: time="2025-07-07T02:58:12.765603036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 02:58:12.768960 containerd[1504]: time="2025-07-07T02:58:12.768789945Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 02:58:12.819704 containerd[1504]: time="2025-07-07T02:58:12.819541169Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5\"" Jul 7 02:58:12.821027 containerd[1504]: time="2025-07-07T02:58:12.820818588Z" level=info msg="StartContainer for \"72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5\"" Jul 7 02:58:12.888476 systemd[1]: Started cri-containerd-72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5.scope - libcontainer container 72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5. Jul 7 02:58:12.938050 containerd[1504]: time="2025-07-07T02:58:12.937993012Z" level=info msg="StartContainer for \"72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5\" returns successfully" Jul 7 02:58:13.579471 kubelet[2669]: E0707 02:58:13.575067 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:14.270003 systemd[1]: cri-containerd-72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5.scope: Deactivated successfully. Jul 7 02:58:14.322913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5-rootfs.mount: Deactivated successfully. 
Jul 7 02:58:14.333220 containerd[1504]: time="2025-07-07T02:58:14.333087621Z" level=info msg="shim disconnected" id=72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5 namespace=k8s.io Jul 7 02:58:14.334087 containerd[1504]: time="2025-07-07T02:58:14.333219165Z" level=warning msg="cleaning up after shim disconnected" id=72329f08cbd10e346048f304eed8bf4261aaf0c16d0c8a9efd7567b94c42f0e5 namespace=k8s.io Jul 7 02:58:14.334087 containerd[1504]: time="2025-07-07T02:58:14.333268517Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:58:14.366817 kubelet[2669]: I0707 02:58:14.366369 2669 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 7 02:58:14.423668 systemd[1]: Created slice kubepods-burstable-pod74713bc6_d2e3_43dc_8c4c_5e5600fd418c.slice - libcontainer container kubepods-burstable-pod74713bc6_d2e3_43dc_8c4c_5e5600fd418c.slice. Jul 7 02:58:14.433269 kubelet[2669]: W0707 02:58:14.430346 2669 reflector.go:561] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:srv-3i0x6.gb1.brightbox.com" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-3i0x6.gb1.brightbox.com' and this object Jul 7 02:58:14.439493 kubelet[2669]: E0707 02:58:14.439416 2669 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:srv-3i0x6.gb1.brightbox.com\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-3i0x6.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 02:58:14.440455 systemd[1]: Created slice kubepods-burstable-pod80330edf_c28a_48e6_926d_3066a67afb9f.slice - libcontainer container kubepods-burstable-pod80330edf_c28a_48e6_926d_3066a67afb9f.slice. Jul 7 02:58:14.448010 kubelet[2669]: W0707 02:58:14.447870 2669 reflector.go:561] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:srv-3i0x6.gb1.brightbox.com" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'srv-3i0x6.gb1.brightbox.com' and this object Jul 7 02:58:14.448010 kubelet[2669]: E0707 02:58:14.447926 2669 reflector.go:158] "Unhandled Error" err="object-\"calico-apiserver\"/\"calico-apiserver-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"calico-apiserver-certs\" is forbidden: User \"system:node:srv-3i0x6.gb1.brightbox.com\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'srv-3i0x6.gb1.brightbox.com' and this object" logger="UnhandledError" Jul 7 02:58:14.468140 systemd[1]: Created slice kubepods-besteffort-podc1e51e64_ed54_4337_b03c_5e29a286c65d.slice - libcontainer container kubepods-besteffort-podc1e51e64_ed54_4337_b03c_5e29a286c65d.slice. Jul 7 02:58:14.488329 systemd[1]: Created slice kubepods-besteffort-podc09fc3d5_40d3_4a9f_af15_68963bda8021.slice - libcontainer container kubepods-besteffort-podc09fc3d5_40d3_4a9f_af15_68963bda8021.slice. 
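The "Created slice" entries above show the kubelet's systemd cgroup driver giving each newly admitted pod its own slice, named from the pod's QoS class and UID with the UID's dashes replaced by underscores (systemd reserves "-" as its slice hierarchy separator). A small sketch of that mapping, checked against a UID from the log; the guaranteed-QoS case is inferred rather than exercised here:

    def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
        """Build the systemd slice name the kubelet uses for a pod's cgroup.

        Burstable and besteffort pods get a QoS level in the name, as in the
        log above; guaranteed pods are expected to sit directly under kubepods.
        """
        uid = pod_uid.replace("-", "_")            # '-' is systemd's hierarchy separator
        qos = f"-{qos_class}" if qos_class else ""
        return f"kubepods{qos}-pod{uid}.slice"

    # UID of coredns-7c65d6cfc9-g7967 from the entries above:
    assert pod_slice_name("74713bc6-d2e3-43dc-8c4c-5e5600fd418c", "burstable") == \
        "kubepods-burstable-pod74713bc6_d2e3_43dc_8c4c_5e5600fd418c.slice"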
Jul 7 02:58:14.508592 kubelet[2669]: I0707 02:58:14.504918 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd86p\" (UniqueName: \"kubernetes.io/projected/74713bc6-d2e3-43dc-8c4c-5e5600fd418c-kube-api-access-rd86p\") pod \"coredns-7c65d6cfc9-g7967\" (UID: \"74713bc6-d2e3-43dc-8c4c-5e5600fd418c\") " pod="kube-system/coredns-7c65d6cfc9-g7967" Jul 7 02:58:14.508592 kubelet[2669]: I0707 02:58:14.504990 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f0c66970-d617-482e-81b9-577d7709b687-calico-apiserver-certs\") pod \"calico-apiserver-556c85fc95-nldxn\" (UID: \"f0c66970-d617-482e-81b9-577d7709b687\") " pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" Jul 7 02:58:14.508592 kubelet[2669]: I0707 02:58:14.505033 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/8242f432-b915-4099-adee-f5f52e15717e-goldmane-key-pair\") pod \"goldmane-58fd7646b9-976bg\" (UID: \"8242f432-b915-4099-adee-f5f52e15717e\") " pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:14.508592 kubelet[2669]: I0707 02:58:14.505079 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1e51e64-ed54-4337-b03c-5e29a286c65d-calico-apiserver-certs\") pod \"calico-apiserver-574fc86944-2st6q\" (UID: \"c1e51e64-ed54-4337-b03c-5e29a286c65d\") " pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" Jul 7 02:58:14.508592 kubelet[2669]: I0707 02:58:14.505115 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls8pz\" (UniqueName: \"kubernetes.io/projected/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-kube-api-access-ls8pz\") pod \"whisker-65d9798d4c-j97sf\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " pod="calico-system/whisker-65d9798d4c-j97sf" Jul 7 02:58:14.508147 systemd[1]: Created slice kubepods-besteffort-pod79b32a8d_dd25_4e48_a397_425ba7037f30.slice - libcontainer container kubepods-besteffort-pod79b32a8d_dd25_4e48_a397_425ba7037f30.slice. 
Jul 7 02:58:14.510840 kubelet[2669]: I0707 02:58:14.505166 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hr9t\" (UniqueName: \"kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t\") pod \"calico-apiserver-574fc86944-2st6q\" (UID: \"c1e51e64-ed54-4337-b03c-5e29a286c65d\") " pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" Jul 7 02:58:14.510840 kubelet[2669]: I0707 02:58:14.505255 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-ca-bundle\") pod \"whisker-65d9798d4c-j97sf\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " pod="calico-system/whisker-65d9798d4c-j97sf" Jul 7 02:58:14.510840 kubelet[2669]: I0707 02:58:14.505304 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfmp7\" (UniqueName: \"kubernetes.io/projected/80330edf-c28a-48e6-926d-3066a67afb9f-kube-api-access-rfmp7\") pod \"coredns-7c65d6cfc9-kq75p\" (UID: \"80330edf-c28a-48e6-926d-3066a67afb9f\") " pod="kube-system/coredns-7c65d6cfc9-kq75p" Jul 7 02:58:14.510840 kubelet[2669]: I0707 02:58:14.505339 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8242f432-b915-4099-adee-f5f52e15717e-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-976bg\" (UID: \"8242f432-b915-4099-adee-f5f52e15717e\") " pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:14.510840 kubelet[2669]: I0707 02:58:14.508926 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c09fc3d5-40d3-4a9f-af15-68963bda8021-calico-apiserver-certs\") pod \"calico-apiserver-574fc86944-7dt46\" (UID: \"c09fc3d5-40d3-4a9f-af15-68963bda8021\") " pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" Jul 7 02:58:14.512122 kubelet[2669]: I0707 02:58:14.508988 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/80330edf-c28a-48e6-926d-3066a67afb9f-config-volume\") pod \"coredns-7c65d6cfc9-kq75p\" (UID: \"80330edf-c28a-48e6-926d-3066a67afb9f\") " pod="kube-system/coredns-7c65d6cfc9-kq75p" Jul 7 02:58:14.512122 kubelet[2669]: I0707 02:58:14.509035 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcm4\" (UniqueName: \"kubernetes.io/projected/8242f432-b915-4099-adee-f5f52e15717e-kube-api-access-cwcm4\") pod \"goldmane-58fd7646b9-976bg\" (UID: \"8242f432-b915-4099-adee-f5f52e15717e\") " pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:14.512122 kubelet[2669]: I0707 02:58:14.509613 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-backend-key-pair\") pod \"whisker-65d9798d4c-j97sf\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " pod="calico-system/whisker-65d9798d4c-j97sf" Jul 7 02:58:14.512122 kubelet[2669]: I0707 02:58:14.509663 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/8242f432-b915-4099-adee-f5f52e15717e-config\") pod \"goldmane-58fd7646b9-976bg\" (UID: \"8242f432-b915-4099-adee-f5f52e15717e\") " pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:14.512122 kubelet[2669]: I0707 02:58:14.509699 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/79b32a8d-dd25-4e48-a397-425ba7037f30-tigera-ca-bundle\") pod \"calico-kube-controllers-6d899c5f84-rp2zh\" (UID: \"79b32a8d-dd25-4e48-a397-425ba7037f30\") " pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" Jul 7 02:58:14.514061 kubelet[2669]: I0707 02:58:14.509744 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74713bc6-d2e3-43dc-8c4c-5e5600fd418c-config-volume\") pod \"coredns-7c65d6cfc9-g7967\" (UID: \"74713bc6-d2e3-43dc-8c4c-5e5600fd418c\") " pod="kube-system/coredns-7c65d6cfc9-g7967" Jul 7 02:58:14.514061 kubelet[2669]: I0707 02:58:14.509799 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2k2h\" (UniqueName: \"kubernetes.io/projected/f0c66970-d617-482e-81b9-577d7709b687-kube-api-access-c2k2h\") pod \"calico-apiserver-556c85fc95-nldxn\" (UID: \"f0c66970-d617-482e-81b9-577d7709b687\") " pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" Jul 7 02:58:14.514061 kubelet[2669]: I0707 02:58:14.509850 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-965j8\" (UniqueName: \"kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8\") pod \"calico-apiserver-574fc86944-7dt46\" (UID: \"c09fc3d5-40d3-4a9f-af15-68963bda8021\") " pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" Jul 7 02:58:14.514061 kubelet[2669]: I0707 02:58:14.509925 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95m7\" (UniqueName: \"kubernetes.io/projected/79b32a8d-dd25-4e48-a397-425ba7037f30-kube-api-access-l95m7\") pod \"calico-kube-controllers-6d899c5f84-rp2zh\" (UID: \"79b32a8d-dd25-4e48-a397-425ba7037f30\") " pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" Jul 7 02:58:14.530904 systemd[1]: Created slice kubepods-besteffort-pod8242f432_b915_4099_adee_f5f52e15717e.slice - libcontainer container kubepods-besteffort-pod8242f432_b915_4099_adee_f5f52e15717e.slice. Jul 7 02:58:14.546442 systemd[1]: Created slice kubepods-besteffort-podf0c66970_d617_482e_81b9_577d7709b687.slice - libcontainer container kubepods-besteffort-podf0c66970_d617_482e_81b9_577d7709b687.slice. Jul 7 02:58:14.557564 systemd[1]: Created slice kubepods-besteffort-pod9a6eddc6_e7ad_41f6_82bc_72ff6e0687ee.slice - libcontainer container kubepods-besteffort-pod9a6eddc6_e7ad_41f6_82bc_72ff6e0687ee.slice. 
Jul 7 02:58:14.750174 containerd[1504]: time="2025-07-07T02:58:14.750115570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7967,Uid:74713bc6-d2e3-43dc-8c4c-5e5600fd418c,Namespace:kube-system,Attempt:0,}" Jul 7 02:58:14.757892 containerd[1504]: time="2025-07-07T02:58:14.757835732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kq75p,Uid:80330edf-c28a-48e6-926d-3066a67afb9f,Namespace:kube-system,Attempt:0,}" Jul 7 02:58:14.830407 containerd[1504]: time="2025-07-07T02:58:14.830139625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d899c5f84-rp2zh,Uid:79b32a8d-dd25-4e48-a397-425ba7037f30,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:14.843605 containerd[1504]: time="2025-07-07T02:58:14.843566750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-976bg,Uid:8242f432-b915-4099-adee-f5f52e15717e,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:14.867512 containerd[1504]: time="2025-07-07T02:58:14.867368438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d9798d4c-j97sf,Uid:9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:14.890110 containerd[1504]: time="2025-07-07T02:58:14.887582552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 02:58:15.152606 containerd[1504]: time="2025-07-07T02:58:15.152090806Z" level=error msg="Failed to destroy network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.160801 containerd[1504]: time="2025-07-07T02:58:15.160498351Z" level=error msg="encountered an error cleaning up failed sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.160801 containerd[1504]: time="2025-07-07T02:58:15.160591039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kq75p,Uid:80330edf-c28a-48e6-926d-3066a67afb9f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.169776 containerd[1504]: time="2025-07-07T02:58:15.169621222Z" level=error msg="Failed to destroy network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.170556 containerd[1504]: time="2025-07-07T02:58:15.170256991Z" level=error msg="encountered an error cleaning up failed sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 02:58:15.170556 containerd[1504]: time="2025-07-07T02:58:15.170328115Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7967,Uid:74713bc6-d2e3-43dc-8c4c-5e5600fd418c,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.170556 containerd[1504]: time="2025-07-07T02:58:15.170460401Z" level=error msg="Failed to destroy network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.171032 containerd[1504]: time="2025-07-07T02:58:15.170996326Z" level=error msg="encountered an error cleaning up failed sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.171216 containerd[1504]: time="2025-07-07T02:58:15.171148443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d899c5f84-rp2zh,Uid:79b32a8d-dd25-4e48-a397-425ba7037f30,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.171706 containerd[1504]: time="2025-07-07T02:58:15.171606175Z" level=error msg="Failed to destroy network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.172076 containerd[1504]: time="2025-07-07T02:58:15.172040822Z" level=error msg="encountered an error cleaning up failed sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.172536 containerd[1504]: time="2025-07-07T02:58:15.172191204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-976bg,Uid:8242f432-b915-4099-adee-f5f52e15717e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.172536 containerd[1504]: time="2025-07-07T02:58:15.172335959Z" level=error msg="Failed to destroy network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.172828 kubelet[2669]: E0707 02:58:15.172553 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.172828 kubelet[2669]: E0707 02:58:15.172677 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:15.172828 kubelet[2669]: E0707 02:58:15.172723 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-976bg" Jul 7 02:58:15.174074 kubelet[2669]: E0707 02:58:15.172789 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-976bg_calico-system(8242f432-b915-4099-adee-f5f52e15717e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-976bg_calico-system(8242f432-b915-4099-adee-f5f52e15717e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-976bg" podUID="8242f432-b915-4099-adee-f5f52e15717e" Jul 7 02:58:15.174074 kubelet[2669]: E0707 02:58:15.172862 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.174074 kubelet[2669]: E0707 02:58:15.172894 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kq75p" Jul 7 02:58:15.174389 containerd[1504]: time="2025-07-07T02:58:15.173586923Z" level=error msg="encountered an error cleaning up failed sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\", 
marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.174389 containerd[1504]: time="2025-07-07T02:58:15.173637106Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65d9798d4c-j97sf,Uid:9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.174529 kubelet[2669]: E0707 02:58:15.172916 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-kq75p" Jul 7 02:58:15.174529 kubelet[2669]: E0707 02:58:15.172954 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-kq75p_kube-system(80330edf-c28a-48e6-926d-3066a67afb9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-kq75p_kube-system(80330edf-c28a-48e6-926d-3066a67afb9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kq75p" podUID="80330edf-c28a-48e6-926d-3066a67afb9f" Jul 7 02:58:15.174529 kubelet[2669]: E0707 02:58:15.173004 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.174734 kubelet[2669]: E0707 02:58:15.173038 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7967" Jul 7 02:58:15.174734 kubelet[2669]: E0707 02:58:15.173059 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-g7967" Jul 7 02:58:15.174734 kubelet[2669]: E0707 02:58:15.173105 2669 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-g7967_kube-system(74713bc6-d2e3-43dc-8c4c-5e5600fd418c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-g7967_kube-system(74713bc6-d2e3-43dc-8c4c-5e5600fd418c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g7967" podUID="74713bc6-d2e3-43dc-8c4c-5e5600fd418c" Jul 7 02:58:15.174888 kubelet[2669]: E0707 02:58:15.173150 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.174888 kubelet[2669]: E0707 02:58:15.174360 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" Jul 7 02:58:15.174888 kubelet[2669]: E0707 02:58:15.174397 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" Jul 7 02:58:15.175392 kubelet[2669]: E0707 02:58:15.174455 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6d899c5f84-rp2zh_calico-system(79b32a8d-dd25-4e48-a397-425ba7037f30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6d899c5f84-rp2zh_calico-system(79b32a8d-dd25-4e48-a397-425ba7037f30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" podUID="79b32a8d-dd25-4e48-a397-425ba7037f30" Jul 7 02:58:15.175392 kubelet[2669]: E0707 02:58:15.175327 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.175392 kubelet[2669]: E0707 02:58:15.175370 2669 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d9798d4c-j97sf" Jul 7 02:58:15.175930 kubelet[2669]: E0707 02:58:15.175396 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65d9798d4c-j97sf" Jul 7 02:58:15.175930 kubelet[2669]: E0707 02:58:15.175436 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65d9798d4c-j97sf_calico-system(9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65d9798d4c-j97sf_calico-system(9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d9798d4c-j97sf" podUID="9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" Jul 7 02:58:15.582783 systemd[1]: Created slice kubepods-besteffort-pod524e68d3_c271_42bd_a0b6_ec9248f8255b.slice - libcontainer container kubepods-besteffort-pod524e68d3_c271_42bd_a0b6_ec9248f8255b.slice. Jul 7 02:58:15.586582 containerd[1504]: time="2025-07-07T02:58:15.586536971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dq6fd,Uid:524e68d3-c271-42bd-a0b6-ec9248f8255b,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:15.649109 kubelet[2669]: E0707 02:58:15.648079 2669 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.649109 kubelet[2669]: E0707 02:58:15.648332 2669 projected.go:194] Error preparing data for projected volume kube-api-access-5hr9t for pod calico-apiserver/calico-apiserver-574fc86944-2st6q: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.659294 kubelet[2669]: E0707 02:58:15.659257 2669 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.659420 kubelet[2669]: E0707 02:58:15.659355 2669 projected.go:194] Error preparing data for projected volume kube-api-access-c2k2h for pod calico-apiserver/calico-apiserver-556c85fc95-nldxn: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.663376 kubelet[2669]: E0707 02:58:15.662943 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t podName:c1e51e64-ed54-4337-b03c-5e29a286c65d nodeName:}" failed. No retries permitted until 2025-07-07 02:58:16.148848641 +0000 UTC m=+37.727383267 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-5hr9t" (UniqueName: "kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t") pod "calico-apiserver-574fc86944-2st6q" (UID: "c1e51e64-ed54-4337-b03c-5e29a286c65d") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.663376 kubelet[2669]: E0707 02:58:15.663036 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f0c66970-d617-482e-81b9-577d7709b687-kube-api-access-c2k2h podName:f0c66970-d617-482e-81b9-577d7709b687 nodeName:}" failed. No retries permitted until 2025-07-07 02:58:16.163007826 +0000 UTC m=+37.741542459 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-c2k2h" (UniqueName: "kubernetes.io/projected/f0c66970-d617-482e-81b9-577d7709b687-kube-api-access-c2k2h") pod "calico-apiserver-556c85fc95-nldxn" (UID: "f0c66970-d617-482e-81b9-577d7709b687") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.667746 kubelet[2669]: E0707 02:58:15.667711 2669 projected.go:288] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.667746 kubelet[2669]: E0707 02:58:15.667743 2669 projected.go:194] Error preparing data for projected volume kube-api-access-965j8 for pod calico-apiserver/calico-apiserver-574fc86944-7dt46: failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.667932 kubelet[2669]: E0707 02:58:15.667840 2669 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8 podName:c09fc3d5-40d3-4a9f-af15-68963bda8021 nodeName:}" failed. No retries permitted until 2025-07-07 02:58:16.167822593 +0000 UTC m=+37.746357215 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-965j8" (UniqueName: "kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8") pod "calico-apiserver-574fc86944-7dt46" (UID: "c09fc3d5-40d3-4a9f-af15-68963bda8021") : failed to sync configmap cache: timed out waiting for the condition Jul 7 02:58:15.697004 containerd[1504]: time="2025-07-07T02:58:15.696903461Z" level=error msg="Failed to destroy network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.699906 containerd[1504]: time="2025-07-07T02:58:15.699859529Z" level=error msg="encountered an error cleaning up failed sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.700534 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e-shm.mount: Deactivated successfully. 
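Every RunPodSandbox failure in this stretch has the same root cause: the Calico CNI binary needs the node name that the calico/node container writes to /var/lib/calico/nodename once it is running, and that container's image (ghcr.io/flatcar/calico/node:v3.30.2, whose pull starts above) has not come up yet, so every CNI add and delete fails the same stat. A minimal illustration of that precondition, assuming only what the error message itself states (the real plugin is part of Calico and written in Go):

    #!/usr/bin/env python3
    # Illustrative check mirroring the precondition the Calico CNI plugin fails on
    # above: /var/lib/calico/nodename must exist (calico/node writes it at startup)
    # before sandbox networking can be set up or torn down.
    import sys

    NODENAME_FILE = "/var/lib/calico/nodename"

    def nodename_or_fail() -> str:
        try:
            with open(NODENAME_FILE, encoding="utf-8") as f:
                return f.read().strip()
        except FileNotFoundError:
            sys.exit(f"stat {NODENAME_FILE}: no such file or directory: "
                     "check that the calico/node container is running and has mounted /var/lib/calico/")

    if __name__ == "__main__":
        print(nodename_or_fail())

Once calico-node from that image is up and /var/lib/calico/ is populated, these sandbox create/stop errors and the repeated "cni plugin not initialized" messages would be expected to clear on retry.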
Jul 7 02:58:15.700813 containerd[1504]: time="2025-07-07T02:58:15.700531066Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dq6fd,Uid:524e68d3-c271-42bd-a0b6-ec9248f8255b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.702581 kubelet[2669]: E0707 02:58:15.702516 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:15.702715 kubelet[2669]: E0707 02:58:15.702630 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:15.702715 kubelet[2669]: E0707 02:58:15.702673 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dq6fd" Jul 7 02:58:15.702860 kubelet[2669]: E0707 02:58:15.702732 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dq6fd_calico-system(524e68d3-c271-42bd-a0b6-ec9248f8255b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dq6fd_calico-system(524e68d3-c271-42bd-a0b6-ec9248f8255b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:15.888110 kubelet[2669]: I0707 02:58:15.887277 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:15.890949 kubelet[2669]: I0707 02:58:15.890535 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:15.912602 kubelet[2669]: I0707 02:58:15.912533 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:15.914773 kubelet[2669]: I0707 02:58:15.914518 2669 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:15.918302 kubelet[2669]: I0707 02:58:15.917610 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:15.924265 kubelet[2669]: I0707 02:58:15.923923 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:15.938304 containerd[1504]: time="2025-07-07T02:58:15.937349784Z" level=info msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" Jul 7 02:58:15.938510 containerd[1504]: time="2025-07-07T02:58:15.938478870Z" level=info msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" Jul 7 02:58:15.939401 containerd[1504]: time="2025-07-07T02:58:15.939366941Z" level=info msg="Ensure that sandbox e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0 in task-service has been cleanup successfully" Jul 7 02:58:15.939573 containerd[1504]: time="2025-07-07T02:58:15.939542251Z" level=info msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" Jul 7 02:58:15.942520 containerd[1504]: time="2025-07-07T02:58:15.942487000Z" level=info msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" Jul 7 02:58:15.942883 containerd[1504]: time="2025-07-07T02:58:15.942836538Z" level=info msg="Ensure that sandbox 9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f in task-service has been cleanup successfully" Jul 7 02:58:15.943180 containerd[1504]: time="2025-07-07T02:58:15.939374235Z" level=info msg="Ensure that sandbox 3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf in task-service has been cleanup successfully" Jul 7 02:58:15.946147 containerd[1504]: time="2025-07-07T02:58:15.946053722Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:58:15.946550 containerd[1504]: time="2025-07-07T02:58:15.946110206Z" level=info msg="Ensure that sandbox fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e in task-service has been cleanup successfully" Jul 7 02:58:15.947387 containerd[1504]: time="2025-07-07T02:58:15.947338394Z" level=info msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" Jul 7 02:58:15.949182 containerd[1504]: time="2025-07-07T02:58:15.949136782Z" level=info msg="Ensure that sandbox c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc in task-service has been cleanup successfully" Jul 7 02:58:15.950481 containerd[1504]: time="2025-07-07T02:58:15.947450609Z" level=info msg="Ensure that sandbox 2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e in task-service has been cleanup successfully" Jul 7 02:58:16.063740 containerd[1504]: time="2025-07-07T02:58:16.063663371Z" level=error msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" failed" error="failed to destroy network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.064201 kubelet[2669]: E0707 02:58:16.064007 2669 log.go:32] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:16.064487 containerd[1504]: time="2025-07-07T02:58:16.064443925Z" level=error msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" failed" error="failed to destroy network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.064797 kubelet[2669]: E0707 02:58:16.064727 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:16.066096 kubelet[2669]: E0707 02:58:16.064781 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc"} Jul 7 02:58:16.066382 kubelet[2669]: E0707 02:58:16.064091 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e"} Jul 7 02:58:16.066382 kubelet[2669]: E0707 02:58:16.066280 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"79b32a8d-dd25-4e48-a397-425ba7037f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.066382 kubelet[2669]: E0707 02:58:16.066296 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.066382 kubelet[2669]: E0707 02:58:16.066320 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"79b32a8d-dd25-4e48-a397-425ba7037f30\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" podUID="79b32a8d-dd25-4e48-a397-425ba7037f30" Jul 7 02:58:16.066734 kubelet[2669]: E0707 02:58:16.066341 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65d9798d4c-j97sf" podUID="9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" Jul 7 02:58:16.080896 containerd[1504]: time="2025-07-07T02:58:16.080729732Z" level=error msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" failed" error="failed to destroy network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.081492 kubelet[2669]: E0707 02:58:16.081065 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:16.081492 kubelet[2669]: E0707 02:58:16.081132 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e"} Jul 7 02:58:16.081492 kubelet[2669]: E0707 02:58:16.081473 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"524e68d3-c271-42bd-a0b6-ec9248f8255b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.081870 kubelet[2669]: E0707 02:58:16.081520 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"524e68d3-c271-42bd-a0b6-ec9248f8255b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:16.082295 containerd[1504]: time="2025-07-07T02:58:16.082109715Z" level=error msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" failed" error="failed to destroy network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.083457 kubelet[2669]: E0707 02:58:16.083332 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:16.083457 kubelet[2669]: E0707 02:58:16.083378 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf"} Jul 7 02:58:16.083457 kubelet[2669]: E0707 02:58:16.083427 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8242f432-b915-4099-adee-f5f52e15717e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.083664 kubelet[2669]: E0707 02:58:16.083483 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8242f432-b915-4099-adee-f5f52e15717e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-976bg" podUID="8242f432-b915-4099-adee-f5f52e15717e" Jul 7 02:58:16.089027 containerd[1504]: time="2025-07-07T02:58:16.088428360Z" level=error msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" failed" error="failed to destroy network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.089714 kubelet[2669]: E0707 02:58:16.089364 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:16.089714 kubelet[2669]: E0707 02:58:16.089466 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f"} Jul 7 02:58:16.089714 kubelet[2669]: E0707 02:58:16.089525 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80330edf-c28a-48e6-926d-3066a67afb9f\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.089714 kubelet[2669]: E0707 02:58:16.089570 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80330edf-c28a-48e6-926d-3066a67afb9f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-kq75p" podUID="80330edf-c28a-48e6-926d-3066a67afb9f" Jul 7 02:58:16.090635 containerd[1504]: time="2025-07-07T02:58:16.090198407Z" level=error msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" failed" error="failed to destroy network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.090733 kubelet[2669]: E0707 02:58:16.090451 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:16.090733 kubelet[2669]: E0707 02:58:16.090508 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0"} Jul 7 02:58:16.090733 kubelet[2669]: E0707 02:58:16.090543 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74713bc6-d2e3-43dc-8c4c-5e5600fd418c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:16.090733 kubelet[2669]: E0707 02:58:16.090576 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74713bc6-d2e3-43dc-8c4c-5e5600fd418c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-g7967" podUID="74713bc6-d2e3-43dc-8c4c-5e5600fd418c" Jul 7 02:58:16.278470 containerd[1504]: time="2025-07-07T02:58:16.278410993Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-574fc86944-2st6q,Uid:c1e51e64-ed54-4337-b03c-5e29a286c65d,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:58:16.312671 containerd[1504]: time="2025-07-07T02:58:16.311115942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-7dt46,Uid:c09fc3d5-40d3-4a9f-af15-68963bda8021,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:58:16.359445 containerd[1504]: time="2025-07-07T02:58:16.358906466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-nldxn,Uid:f0c66970-d617-482e-81b9-577d7709b687,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:58:16.470520 containerd[1504]: time="2025-07-07T02:58:16.470439141Z" level=error msg="Failed to destroy network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.475097 containerd[1504]: time="2025-07-07T02:58:16.475057869Z" level=error msg="encountered an error cleaning up failed sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.475645 containerd[1504]: time="2025-07-07T02:58:16.475590581Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-2st6q,Uid:c1e51e64-ed54-4337-b03c-5e29a286c65d,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.476807 kubelet[2669]: E0707 02:58:16.476652 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.478129 kubelet[2669]: E0707 02:58:16.476776 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" Jul 7 02:58:16.478129 kubelet[2669]: E0707 02:58:16.477471 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" Jul 7 02:58:16.478129 kubelet[2669]: E0707 02:58:16.477603 2669 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574fc86944-2st6q_calico-apiserver(c1e51e64-ed54-4337-b03c-5e29a286c65d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574fc86944-2st6q_calico-apiserver(c1e51e64-ed54-4337-b03c-5e29a286c65d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" podUID="c1e51e64-ed54-4337-b03c-5e29a286c65d" Jul 7 02:58:16.497355 containerd[1504]: time="2025-07-07T02:58:16.497281961Z" level=error msg="Failed to destroy network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.497917 containerd[1504]: time="2025-07-07T02:58:16.497738950Z" level=error msg="encountered an error cleaning up failed sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.497917 containerd[1504]: time="2025-07-07T02:58:16.497818907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-7dt46,Uid:c09fc3d5-40d3-4a9f-af15-68963bda8021,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.498388 kubelet[2669]: E0707 02:58:16.498148 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.498388 kubelet[2669]: E0707 02:58:16.498331 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" Jul 7 02:58:16.498528 kubelet[2669]: E0707 02:58:16.498406 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" Jul 7 02:58:16.499254 kubelet[2669]: E0707 02:58:16.498517 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-574fc86944-7dt46_calico-apiserver(c09fc3d5-40d3-4a9f-af15-68963bda8021)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-574fc86944-7dt46_calico-apiserver(c09fc3d5-40d3-4a9f-af15-68963bda8021)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" podUID="c09fc3d5-40d3-4a9f-af15-68963bda8021" Jul 7 02:58:16.521003 containerd[1504]: time="2025-07-07T02:58:16.520813342Z" level=error msg="Failed to destroy network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.521671 containerd[1504]: time="2025-07-07T02:58:16.521563992Z" level=error msg="encountered an error cleaning up failed sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.521854 containerd[1504]: time="2025-07-07T02:58:16.521693788Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-nldxn,Uid:f0c66970-d617-482e-81b9-577d7709b687,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.522105 kubelet[2669]: E0707 02:58:16.521992 2669 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:16.522105 kubelet[2669]: E0707 02:58:16.522070 2669 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" Jul 7 02:58:16.522363 kubelet[2669]: E0707 02:58:16.522113 2669 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" Jul 7 02:58:16.522363 kubelet[2669]: E0707 02:58:16.522190 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-556c85fc95-nldxn_calico-apiserver(f0c66970-d617-482e-81b9-577d7709b687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-556c85fc95-nldxn_calico-apiserver(f0c66970-d617-482e-81b9-577d7709b687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" podUID="f0c66970-d617-482e-81b9-577d7709b687" Jul 7 02:58:16.929367 kubelet[2669]: I0707 02:58:16.927879 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:16.929835 containerd[1504]: time="2025-07-07T02:58:16.929497749Z" level=info msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" Jul 7 02:58:16.930335 containerd[1504]: time="2025-07-07T02:58:16.930211370Z" level=info msg="Ensure that sandbox 7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f in task-service has been cleanup successfully" Jul 7 02:58:16.932572 kubelet[2669]: I0707 02:58:16.932147 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:16.933958 containerd[1504]: time="2025-07-07T02:58:16.933884874Z" level=info msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" Jul 7 02:58:16.935430 containerd[1504]: time="2025-07-07T02:58:16.935337066Z" level=info msg="Ensure that sandbox 0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc in task-service has been cleanup successfully" Jul 7 02:58:16.938219 kubelet[2669]: I0707 02:58:16.938048 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:16.939759 containerd[1504]: time="2025-07-07T02:58:16.939725469Z" level=info msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" Jul 7 02:58:16.939975 containerd[1504]: time="2025-07-07T02:58:16.939945861Z" level=info msg="Ensure that sandbox d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02 in task-service has been cleanup successfully" Jul 7 02:58:17.017861 containerd[1504]: time="2025-07-07T02:58:17.017794439Z" level=error msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" failed" error="failed to destroy network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:17.019267 kubelet[2669]: E0707 02:58:17.019145 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
destroy network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:17.019392 kubelet[2669]: E0707 02:58:17.019328 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f"} Jul 7 02:58:17.019483 kubelet[2669]: E0707 02:58:17.019401 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c1e51e64-ed54-4337-b03c-5e29a286c65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:17.019726 kubelet[2669]: E0707 02:58:17.019439 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c1e51e64-ed54-4337-b03c-5e29a286c65d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" podUID="c1e51e64-ed54-4337-b03c-5e29a286c65d" Jul 7 02:58:17.028761 containerd[1504]: time="2025-07-07T02:58:17.028504602Z" level=error msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" failed" error="failed to destroy network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:17.029260 kubelet[2669]: E0707 02:58:17.028774 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:17.029260 kubelet[2669]: E0707 02:58:17.028834 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02"} Jul 7 02:58:17.029260 kubelet[2669]: E0707 02:58:17.028894 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c09fc3d5-40d3-4a9f-af15-68963bda8021\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Jul 7 02:58:17.029260 kubelet[2669]: E0707 02:58:17.028933 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c09fc3d5-40d3-4a9f-af15-68963bda8021\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" podUID="c09fc3d5-40d3-4a9f-af15-68963bda8021" Jul 7 02:58:17.030851 containerd[1504]: time="2025-07-07T02:58:17.030487606Z" level=error msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" failed" error="failed to destroy network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:17.030973 kubelet[2669]: E0707 02:58:17.030665 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:17.030973 kubelet[2669]: E0707 02:58:17.030706 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc"} Jul 7 02:58:17.030973 kubelet[2669]: E0707 02:58:17.030743 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0c66970-d617-482e-81b9-577d7709b687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:17.030973 kubelet[2669]: E0707 02:58:17.030783 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0c66970-d617-482e-81b9-577d7709b687\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" podUID="f0c66970-d617-482e-81b9-577d7709b687" Jul 7 02:58:17.321807 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc-shm.mount: Deactivated successfully. Jul 7 02:58:17.322025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02-shm.mount: Deactivated successfully. 
Jul 7 02:58:17.322196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f-shm.mount: Deactivated successfully. Jul 7 02:58:21.053005 kubelet[2669]: I0707 02:58:21.051710 2669 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 02:58:25.541704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3217732046.mount: Deactivated successfully. Jul 7 02:58:25.638581 containerd[1504]: time="2025-07-07T02:58:25.637431341Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:25.664394 containerd[1504]: time="2025-07-07T02:58:25.664277808Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 02:58:25.684853 containerd[1504]: time="2025-07-07T02:58:25.684774253Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:25.688569 containerd[1504]: time="2025-07-07T02:58:25.688474047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:25.693727 containerd[1504]: time="2025-07-07T02:58:25.693292555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.80173569s" Jul 7 02:58:25.693727 containerd[1504]: time="2025-07-07T02:58:25.693356628Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 02:58:25.789076 containerd[1504]: time="2025-07-07T02:58:25.788634432Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 02:58:25.919095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2539376580.mount: Deactivated successfully. Jul 7 02:58:25.929771 containerd[1504]: time="2025-07-07T02:58:25.929691214Z" level=info msg="CreateContainer within sandbox \"588201a3b5149ddcae30d03acb43f85cd30520081176b37e77dc24e14b60f647\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0717f2ff88ad11892d3b921105fec3b330e5c32b454e87863e2f0b0b7e26864d\"" Jul 7 02:58:25.932335 containerd[1504]: time="2025-07-07T02:58:25.931029706Z" level=info msg="StartContainer for \"0717f2ff88ad11892d3b921105fec3b330e5c32b454e87863e2f0b0b7e26864d\"" Jul 7 02:58:26.120470 systemd[1]: Started cri-containerd-0717f2ff88ad11892d3b921105fec3b330e5c32b454e87863e2f0b0b7e26864d.scope - libcontainer container 0717f2ff88ad11892d3b921105fec3b330e5c32b454e87863e2f0b0b7e26864d. Jul 7 02:58:26.197313 containerd[1504]: time="2025-07-07T02:58:26.196886695Z" level=info msg="StartContainer for \"0717f2ff88ad11892d3b921105fec3b330e5c32b454e87863e2f0b0b7e26864d\" returns successfully" Jul 7 02:58:26.463731 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. 
Jul 7 02:58:26.464789 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jul 7 02:58:26.578513 containerd[1504]: time="2025-07-07T02:58:26.578434321Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:58:26.684896 containerd[1504]: time="2025-07-07T02:58:26.684811870Z" level=error msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" failed" error="failed to destroy network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 02:58:26.685791 kubelet[2669]: E0707 02:58:26.685461 2669 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:26.685791 kubelet[2669]: E0707 02:58:26.685592 2669 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e"} Jul 7 02:58:26.685791 kubelet[2669]: E0707 02:58:26.685677 2669 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"524e68d3-c271-42bd-a0b6-ec9248f8255b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jul 7 02:58:26.685791 kubelet[2669]: E0707 02:58:26.685741 2669 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"524e68d3-c271-42bd-a0b6-ec9248f8255b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dq6fd" podUID="524e68d3-c271-42bd-a0b6-ec9248f8255b" Jul 7 02:58:26.818639 containerd[1504]: time="2025-07-07T02:58:26.818569708Z" level=info msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.006 [INFO][3945] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.006 [INFO][3945] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns.
ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" iface="eth0" netns="/var/run/netns/cni-ba7ed077-f283-a4c5-e1d2-2046e57e67cd" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.006 [INFO][3945] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" iface="eth0" netns="/var/run/netns/cni-ba7ed077-f283-a4c5-e1d2-2046e57e67cd" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.008 [INFO][3945] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" iface="eth0" netns="/var/run/netns/cni-ba7ed077-f283-a4c5-e1d2-2046e57e67cd" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.009 [INFO][3945] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.009 [INFO][3945] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.316 [INFO][3952] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.320 [INFO][3952] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.320 [INFO][3952] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.338 [WARNING][3952] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.338 [INFO][3952] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.341 [INFO][3952] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:27.347441 containerd[1504]: 2025-07-07 02:58:27.345 [INFO][3945] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:27.349797 containerd[1504]: time="2025-07-07T02:58:27.347676546Z" level=info msg="TearDown network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" successfully" Jul 7 02:58:27.349797 containerd[1504]: time="2025-07-07T02:58:27.347714226Z" level=info msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" returns successfully" Jul 7 02:58:27.353351 systemd[1]: run-netns-cni\x2dba7ed077\x2df283\x2da4c5\x2de1d2\x2d2046e57e67cd.mount: Deactivated successfully. 
Jul 7 02:58:27.517369 kubelet[2669]: I0707 02:58:27.516773 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-backend-key-pair\") pod \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " Jul 7 02:58:27.517369 kubelet[2669]: I0707 02:58:27.516972 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls8pz\" (UniqueName: \"kubernetes.io/projected/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-kube-api-access-ls8pz\") pod \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " Jul 7 02:58:27.528206 kubelet[2669]: I0707 02:58:27.525931 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-ca-bundle\") pod \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\" (UID: \"9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee\") " Jul 7 02:58:27.548080 kubelet[2669]: I0707 02:58:27.543861 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" (UID: "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 7 02:58:27.547757 systemd[1]: var-lib-kubelet-pods-9a6eddc6\x2de7ad\x2d41f6\x2d82bc\x2d72ff6e0687ee-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dls8pz.mount: Deactivated successfully. Jul 7 02:58:27.549659 kubelet[2669]: I0707 02:58:27.548213 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" (UID: "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 02:58:27.550156 kubelet[2669]: I0707 02:58:27.548743 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-kube-api-access-ls8pz" (OuterVolumeSpecName: "kube-api-access-ls8pz") pod "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" (UID: "9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee"). InnerVolumeSpecName "kube-api-access-ls8pz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 02:58:27.555871 systemd[1]: var-lib-kubelet-pods-9a6eddc6\x2de7ad\x2d41f6\x2d82bc\x2d72ff6e0687ee-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 02:58:27.577644 containerd[1504]: time="2025-07-07T02:58:27.577075797Z" level=info msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" Jul 7 02:58:27.626938 kubelet[2669]: I0707 02:58:27.626626 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls8pz\" (UniqueName: \"kubernetes.io/projected/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-kube-api-access-ls8pz\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:58:27.626938 kubelet[2669]: I0707 02:58:27.626679 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-ca-bundle\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:58:27.626938 kubelet[2669]: I0707 02:58:27.626703 2669 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee-whisker-backend-key-pair\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:58:27.668852 kubelet[2669]: I0707 02:58:27.657003 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v9ckq" podStartSLOduration=3.247876397 podStartE2EDuration="26.654012811s" podCreationTimestamp="2025-07-07 02:58:01 +0000 UTC" firstStartedPulling="2025-07-07 02:58:02.289141764 +0000 UTC m=+23.867676391" lastFinishedPulling="2025-07-07 02:58:25.695278184 +0000 UTC m=+47.273812805" observedRunningTime="2025-07-07 02:58:27.042627237 +0000 UTC m=+48.621161876" watchObservedRunningTime="2025-07-07 02:58:27.654012811 +0000 UTC m=+49.232547456" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.652 [INFO][4003] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.654 [INFO][4003] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" iface="eth0" netns="/var/run/netns/cni-1d003ff8-acad-d733-f2ec-91bd5c1010df" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.655 [INFO][4003] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" iface="eth0" netns="/var/run/netns/cni-1d003ff8-acad-d733-f2ec-91bd5c1010df" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.656 [INFO][4003] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" iface="eth0" netns="/var/run/netns/cni-1d003ff8-acad-d733-f2ec-91bd5c1010df" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.656 [INFO][4003] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.656 [INFO][4003] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.699 [INFO][4011] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.699 [INFO][4011] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.699 [INFO][4011] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.708 [WARNING][4011] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.709 [INFO][4011] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.710 [INFO][4011] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:27.716373 containerd[1504]: 2025-07-07 02:58:27.714 [INFO][4003] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:27.718683 containerd[1504]: time="2025-07-07T02:58:27.717974556Z" level=info msg="TearDown network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" successfully" Jul 7 02:58:27.718683 containerd[1504]: time="2025-07-07T02:58:27.718027529Z" level=info msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" returns successfully" Jul 7 02:58:27.719266 containerd[1504]: time="2025-07-07T02:58:27.719165881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-7dt46,Uid:c09fc3d5-40d3-4a9f-af15-68963bda8021,Namespace:calico-apiserver,Attempt:1,}" Jul 7 02:58:27.722101 systemd[1]: run-netns-cni\x2d1d003ff8\x2dacad\x2dd733\x2df2ec\x2d91bd5c1010df.mount: Deactivated successfully. 
Jul 7 02:58:27.921469 systemd-networkd[1433]: cali14d4bc10b6d: Link UP Jul 7 02:58:27.922664 systemd-networkd[1433]: cali14d4bc10b6d: Gained carrier Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.787 [INFO][4020] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.803 [INFO][4020] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0 calico-apiserver-574fc86944- calico-apiserver c09fc3d5-40d3-4a9f-af15-68963bda8021 953 0 2025-07-07 02:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574fc86944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com calico-apiserver-574fc86944-7dt46 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali14d4bc10b6d [] [] }} ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.803 [INFO][4020] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.843 [INFO][4031] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.844 [INFO][4031] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002caff0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"calico-apiserver-574fc86944-7dt46", "timestamp":"2025-07-07 02:58:27.84335918 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.844 [INFO][4031] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.844 [INFO][4031] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.844 [INFO][4031] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.857 [INFO][4031] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.869 [INFO][4031] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.876 [INFO][4031] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.879 [INFO][4031] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.882 [INFO][4031] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.882 [INFO][4031] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.884 [INFO][4031] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.890 [INFO][4031] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.900 [INFO][4031] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.129/26] block=192.168.59.128/26 handle="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.900 [INFO][4031] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.129/26] handle="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.900 [INFO][4031] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:58:27.948615 containerd[1504]: 2025-07-07 02:58:27.900 [INFO][4031] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.129/26] IPv6=[] ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.903 [INFO][4020] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09fc3d5-40d3-4a9f-af15-68963bda8021", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-574fc86944-7dt46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4bc10b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.904 [INFO][4020] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.129/32] ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.904 [INFO][4020] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14d4bc10b6d ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.923 [INFO][4020] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.925 
[INFO][4020] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09fc3d5-40d3-4a9f-af15-68963bda8021", ResourceVersion:"953", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d", Pod:"calico-apiserver-574fc86944-7dt46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4bc10b6d", MAC:"0e:b6:33:ee:65:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:27.951368 containerd[1504]: 2025-07-07 02:58:27.941 [INFO][4020] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-7dt46" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:27.997525 systemd[1]: Removed slice kubepods-besteffort-pod9a6eddc6_e7ad_41f6_82bc_72ff6e0687ee.slice - libcontainer container kubepods-besteffort-pod9a6eddc6_e7ad_41f6_82bc_72ff6e0687ee.slice. Jul 7 02:58:28.017283 containerd[1504]: time="2025-07-07T02:58:28.017093045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:28.019588 containerd[1504]: time="2025-07-07T02:58:28.018076098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:28.019588 containerd[1504]: time="2025-07-07T02:58:28.018113595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:28.019588 containerd[1504]: time="2025-07-07T02:58:28.018269044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:28.080848 systemd[1]: Started cri-containerd-1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d.scope - libcontainer container 1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d. Jul 7 02:58:28.172471 systemd[1]: Created slice kubepods-besteffort-pod5c665995_4960_47fc_b1cb_307e78262564.slice - libcontainer container kubepods-besteffort-pod5c665995_4960_47fc_b1cb_307e78262564.slice. Jul 7 02:58:28.264494 containerd[1504]: time="2025-07-07T02:58:28.264437371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-7dt46,Uid:c09fc3d5-40d3-4a9f-af15-68963bda8021,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\"" Jul 7 02:58:28.274300 containerd[1504]: time="2025-07-07T02:58:28.274164872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 02:58:28.332552 kubelet[2669]: I0707 02:58:28.332476 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5c665995-4960-47fc-b1cb-307e78262564-whisker-backend-key-pair\") pod \"whisker-668b845577-6qbdj\" (UID: \"5c665995-4960-47fc-b1cb-307e78262564\") " pod="calico-system/whisker-668b845577-6qbdj" Jul 7 02:58:28.333387 kubelet[2669]: I0707 02:58:28.332570 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9mr\" (UniqueName: \"kubernetes.io/projected/5c665995-4960-47fc-b1cb-307e78262564-kube-api-access-vd9mr\") pod \"whisker-668b845577-6qbdj\" (UID: \"5c665995-4960-47fc-b1cb-307e78262564\") " pod="calico-system/whisker-668b845577-6qbdj" Jul 7 02:58:28.333387 kubelet[2669]: I0707 02:58:28.332607 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c665995-4960-47fc-b1cb-307e78262564-whisker-ca-bundle\") pod \"whisker-668b845577-6qbdj\" (UID: \"5c665995-4960-47fc-b1cb-307e78262564\") " pod="calico-system/whisker-668b845577-6qbdj" Jul 7 02:58:28.481527 containerd[1504]: time="2025-07-07T02:58:28.481023342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-668b845577-6qbdj,Uid:5c665995-4960-47fc-b1cb-307e78262564,Namespace:calico-system,Attempt:0,}" Jul 7 02:58:28.577709 containerd[1504]: time="2025-07-07T02:58:28.577508100Z" level=info msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" Jul 7 02:58:28.588855 kubelet[2669]: I0707 02:58:28.587663 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee" path="/var/lib/kubelet/pods/9a6eddc6-e7ad-41f6-82bc-72ff6e0687ee/volumes" Jul 7 02:58:28.806908 systemd-networkd[1433]: cali0d27d37ec49: Link UP Jul 7 02:58:28.810410 systemd-networkd[1433]: cali0d27d37ec49: Gained carrier Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.675 [INFO][4140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.675 [INFO][4140] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" iface="eth0" netns="/var/run/netns/cni-7b2a840a-1624-6209-9d0c-1f181f8e1818" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.676 [INFO][4140] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" iface="eth0" netns="/var/run/netns/cni-7b2a840a-1624-6209-9d0c-1f181f8e1818" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.677 [INFO][4140] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" iface="eth0" netns="/var/run/netns/cni-7b2a840a-1624-6209-9d0c-1f181f8e1818" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.677 [INFO][4140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.677 [INFO][4140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.731 [INFO][4156] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.731 [INFO][4156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.792 [INFO][4156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.809 [WARNING][4156] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.809 [INFO][4156] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.820 [INFO][4156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:28.845718 containerd[1504]: 2025-07-07 02:58:28.833 [INFO][4140] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:28.848661 containerd[1504]: time="2025-07-07T02:58:28.847013445Z" level=info msg="TearDown network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" successfully" Jul 7 02:58:28.848661 containerd[1504]: time="2025-07-07T02:58:28.847048568Z" level=info msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" returns successfully" Jul 7 02:58:28.851201 containerd[1504]: time="2025-07-07T02:58:28.851149396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kq75p,Uid:80330edf-c28a-48e6-926d-3066a67afb9f,Namespace:kube-system,Attempt:1,}" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.535 [INFO][4110] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.560 [INFO][4110] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0 whisker-668b845577- calico-system 5c665995-4960-47fc-b1cb-307e78262564 973 0 2025-07-07 02:58:28 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:668b845577 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com whisker-668b845577-6qbdj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0d27d37ec49 [] [] }} ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.560 [INFO][4110] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.717 [INFO][4135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" HandleID="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.720 [INFO][4135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" HandleID="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122320), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"whisker-668b845577-6qbdj", "timestamp":"2025-07-07 02:58:28.715585393 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.720 [INFO][4135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM 
lock. Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.720 [INFO][4135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.720 [INFO][4135] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.736 [INFO][4135] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.755 [INFO][4135] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.762 [INFO][4135] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.765 [INFO][4135] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.768 [INFO][4135] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.768 [INFO][4135] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.772 [INFO][4135] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987 Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.778 [INFO][4135] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.791 [INFO][4135] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.130/26] block=192.168.59.128/26 handle="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.791 [INFO][4135] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.130/26] handle="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.791 [INFO][4135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:58:28.855619 containerd[1504]: 2025-07-07 02:58:28.791 [INFO][4135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.130/26] IPv6=[] ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" HandleID="k8s-pod-network.315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.797 [INFO][4110] cni-plugin/k8s.go 418: Populated endpoint ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0", GenerateName:"whisker-668b845577-", Namespace:"calico-system", SelfLink:"", UID:"5c665995-4960-47fc-b1cb-307e78262564", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"668b845577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"whisker-668b845577-6qbdj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0d27d37ec49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.797 [INFO][4110] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.130/32] ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.797 [INFO][4110] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d27d37ec49 ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.813 [INFO][4110] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.818 [INFO][4110] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" 
Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0", GenerateName:"whisker-668b845577-", Namespace:"calico-system", SelfLink:"", UID:"5c665995-4960-47fc-b1cb-307e78262564", ResourceVersion:"973", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"668b845577", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987", Pod:"whisker-668b845577-6qbdj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0d27d37ec49", MAC:"66:3d:0d:ed:2f:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:28.860477 containerd[1504]: 2025-07-07 02:58:28.844 [INFO][4110] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987" Namespace="calico-system" Pod="whisker-668b845577-6qbdj" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--668b845577--6qbdj-eth0" Jul 7 02:58:28.856040 systemd[1]: run-netns-cni\x2d7b2a840a\x2d1624\x2d6209\x2d9d0c\x2d1f181f8e1818.mount: Deactivated successfully. Jul 7 02:58:29.038989 containerd[1504]: time="2025-07-07T02:58:29.031172765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:29.038989 containerd[1504]: time="2025-07-07T02:58:29.031295690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:29.038989 containerd[1504]: time="2025-07-07T02:58:29.031347411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:29.038989 containerd[1504]: time="2025-07-07T02:58:29.031492778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:29.226540 systemd[1]: Started cri-containerd-315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987.scope - libcontainer container 315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987. 
Jul 7 02:58:29.431454 systemd-networkd[1433]: cali1e03612df5f: Link UP Jul 7 02:58:29.433412 systemd-networkd[1433]: cali1e03612df5f: Gained carrier Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.124 [INFO][4205] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.178 [INFO][4205] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0 coredns-7c65d6cfc9- kube-system 80330edf-c28a-48e6-926d-3066a67afb9f 977 0 2025-07-07 02:57:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com coredns-7c65d6cfc9-kq75p eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1e03612df5f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.181 [INFO][4205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.333 [INFO][4288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" HandleID="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.333 [INFO][4288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" HandleID="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00037c030), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-kq75p", "timestamp":"2025-07-07 02:58:29.33253222 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.333 [INFO][4288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.334 [INFO][4288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.334 [INFO][4288] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.352 [INFO][4288] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.366 [INFO][4288] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.381 [INFO][4288] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.385 [INFO][4288] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.390 [INFO][4288] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.390 [INFO][4288] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.394 [INFO][4288] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0 Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.401 [INFO][4288] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.415 [INFO][4288] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.131/26] block=192.168.59.128/26 handle="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.416 [INFO][4288] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.131/26] handle="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.416 [INFO][4288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:58:29.467836 containerd[1504]: 2025-07-07 02:58:29.416 [INFO][4288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.131/26] IPv6=[] ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" HandleID="k8s-pod-network.19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.424 [INFO][4205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80330edf-c28a-48e6-926d-3066a67afb9f", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-kq75p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e03612df5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.425 [INFO][4205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.131/32] ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.425 [INFO][4205] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1e03612df5f ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.433 [INFO][4205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.435 [INFO][4205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80330edf-c28a-48e6-926d-3066a67afb9f", ResourceVersion:"977", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0", Pod:"coredns-7c65d6cfc9-kq75p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e03612df5f", MAC:"ba:85:ea:de:fa:95", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:29.469398 containerd[1504]: 2025-07-07 02:58:29.456 [INFO][4205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0" Namespace="kube-system" Pod="coredns-7c65d6cfc9-kq75p" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:29.520815 systemd-networkd[1433]: cali14d4bc10b6d: Gained IPv6LL Jul 7 02:58:29.563260 containerd[1504]: time="2025-07-07T02:58:29.562755758Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:29.564116 containerd[1504]: time="2025-07-07T02:58:29.563170118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:29.564901 containerd[1504]: time="2025-07-07T02:58:29.564819486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:29.565465 containerd[1504]: time="2025-07-07T02:58:29.565392802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:29.591135 containerd[1504]: time="2025-07-07T02:58:29.590653605Z" level=info msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" Jul 7 02:58:29.608279 containerd[1504]: time="2025-07-07T02:58:29.608212273Z" level=info msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" Jul 7 02:58:29.747369 containerd[1504]: time="2025-07-07T02:58:29.740299651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-668b845577-6qbdj,Uid:5c665995-4960-47fc-b1cb-307e78262564,Namespace:calico-system,Attempt:0,} returns sandbox id \"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987\"" Jul 7 02:58:29.760734 systemd[1]: Started cri-containerd-19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0.scope - libcontainer container 19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0. Jul 7 02:58:29.906577 containerd[1504]: time="2025-07-07T02:58:29.906513004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-kq75p,Uid:80330edf-c28a-48e6-926d-3066a67afb9f,Namespace:kube-system,Attempt:1,} returns sandbox id \"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0\"" Jul 7 02:58:29.925267 containerd[1504]: time="2025-07-07T02:58:29.922029661Z" level=info msg="CreateContainer within sandbox \"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:58:29.979008 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686473379.mount: Deactivated successfully. Jul 7 02:58:29.993452 containerd[1504]: time="2025-07-07T02:58:29.992962468Z" level=info msg="CreateContainer within sandbox \"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0e31b94e560e628fda9c9f98d8789bc07d4da6cfacc6edbabd795d2324f7d15\"" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.854 [INFO][4355] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.854 [INFO][4355] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" iface="eth0" netns="/var/run/netns/cni-89ba597d-2dca-0d13-f535-78d7dba26116" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.855 [INFO][4355] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" iface="eth0" netns="/var/run/netns/cni-89ba597d-2dca-0d13-f535-78d7dba26116" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.860 [INFO][4355] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" iface="eth0" netns="/var/run/netns/cni-89ba597d-2dca-0d13-f535-78d7dba26116" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.860 [INFO][4355] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.862 [INFO][4355] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.942 [INFO][4401] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.943 [INFO][4401] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.943 [INFO][4401] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.963 [WARNING][4401] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.963 [INFO][4401] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.981 [INFO][4401] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:30.001089 containerd[1504]: 2025-07-07 02:58:29.984 [INFO][4355] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:30.007136 containerd[1504]: time="2025-07-07T02:58:30.005756932Z" level=info msg="TearDown network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" successfully" Jul 7 02:58:30.007136 containerd[1504]: time="2025-07-07T02:58:30.006615370Z" level=info msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" returns successfully" Jul 7 02:58:30.009285 containerd[1504]: time="2025-07-07T02:58:30.005938971Z" level=info msg="StartContainer for \"a0e31b94e560e628fda9c9f98d8789bc07d4da6cfacc6edbabd795d2324f7d15\"" Jul 7 02:58:30.017739 containerd[1504]: time="2025-07-07T02:58:30.017581115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-976bg,Uid:8242f432-b915-4099-adee-f5f52e15717e,Namespace:calico-system,Attempt:1,}" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.962 [INFO][4378] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.965 [INFO][4378] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" iface="eth0" netns="/var/run/netns/cni-53a70345-860a-a4ca-c8e7-fcb4b8042b21" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.967 [INFO][4378] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" iface="eth0" netns="/var/run/netns/cni-53a70345-860a-a4ca-c8e7-fcb4b8042b21" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.976 [INFO][4378] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" iface="eth0" netns="/var/run/netns/cni-53a70345-860a-a4ca-c8e7-fcb4b8042b21" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.977 [INFO][4378] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:29.977 [INFO][4378] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.076 [INFO][4415] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.082 [INFO][4415] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.082 [INFO][4415] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.101 [WARNING][4415] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.102 [INFO][4415] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.109 [INFO][4415] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:30.125623 containerd[1504]: 2025-07-07 02:58:30.115 [INFO][4378] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:30.132509 containerd[1504]: time="2025-07-07T02:58:30.126209859Z" level=info msg="TearDown network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" successfully" Jul 7 02:58:30.132509 containerd[1504]: time="2025-07-07T02:58:30.126349556Z" level=info msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" returns successfully" Jul 7 02:58:30.133450 containerd[1504]: time="2025-07-07T02:58:30.132876835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-2st6q,Uid:c1e51e64-ed54-4337-b03c-5e29a286c65d,Namespace:calico-apiserver,Attempt:1,}" Jul 7 02:58:30.160458 systemd[1]: Started cri-containerd-a0e31b94e560e628fda9c9f98d8789bc07d4da6cfacc6edbabd795d2324f7d15.scope - libcontainer container a0e31b94e560e628fda9c9f98d8789bc07d4da6cfacc6edbabd795d2324f7d15. Jul 7 02:58:30.218522 systemd-networkd[1433]: cali0d27d37ec49: Gained IPv6LL Jul 7 02:58:30.326270 containerd[1504]: time="2025-07-07T02:58:30.325666322Z" level=info msg="StartContainer for \"a0e31b94e560e628fda9c9f98d8789bc07d4da6cfacc6edbabd795d2324f7d15\" returns successfully" Jul 7 02:58:30.573041 systemd[1]: run-netns-cni\x2d53a70345\x2d860a\x2da4ca\x2dc8e7\x2dfcb4b8042b21.mount: Deactivated successfully. Jul 7 02:58:30.573186 systemd[1]: run-netns-cni\x2d89ba597d\x2d2dca\x2d0d13\x2df535\x2d78d7dba26116.mount: Deactivated successfully. Jul 7 02:58:30.582226 containerd[1504]: time="2025-07-07T02:58:30.580696831Z" level=info msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" Jul 7 02:58:30.603336 containerd[1504]: time="2025-07-07T02:58:30.603058808Z" level=info msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" Jul 7 02:58:30.769834 systemd-networkd[1433]: cali7c028471174: Link UP Jul 7 02:58:30.778500 systemd-networkd[1433]: cali7c028471174: Gained carrier Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.196 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.295 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0 goldmane-58fd7646b9- calico-system 8242f432-b915-4099-adee-f5f52e15717e 991 0 2025-07-07 02:58:01 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:58fd7646b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com goldmane-58fd7646b9-976bg eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali7c028471174 [] [] }} ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.298 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.540 [INFO][4479] 
ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" HandleID="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.541 [INFO][4479] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" HandleID="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003441d0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"goldmane-58fd7646b9-976bg", "timestamp":"2025-07-07 02:58:30.538971929 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.544 [INFO][4479] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.544 [INFO][4479] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.544 [INFO][4479] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.596 [INFO][4479] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.649 [INFO][4479] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.669 [INFO][4479] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.677 [INFO][4479] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.685 [INFO][4479] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.688 [INFO][4479] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.695 [INFO][4479] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7 Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.705 [INFO][4479] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.733 [INFO][4479] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.132/26] block=192.168.59.128/26 
handle="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.733 [INFO][4479] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.132/26] handle="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.733 [INFO][4479] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:30.847030 containerd[1504]: 2025-07-07 02:58:30.733 [INFO][4479] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.132/26] IPv6=[] ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" HandleID="k8s-pod-network.9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.750 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8242f432-b915-4099-adee-f5f52e15717e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-58fd7646b9-976bg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c028471174", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.752 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.132/32] ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.753 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c028471174 ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.768 [INFO][4429] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.780 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8242f432-b915-4099-adee-f5f52e15717e", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7", Pod:"goldmane-58fd7646b9-976bg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c028471174", MAC:"de:c5:12:f7:cb:ac", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:30.850418 containerd[1504]: 2025-07-07 02:58:30.842 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7" Namespace="calico-system" Pod="goldmane-58fd7646b9-976bg" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:31.025096 containerd[1504]: time="2025-07-07T02:58:31.021922803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:31.025096 containerd[1504]: time="2025-07-07T02:58:31.022049861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:31.025096 containerd[1504]: time="2025-07-07T02:58:31.022070823Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:31.025096 containerd[1504]: time="2025-07-07T02:58:31.022228873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:31.030684 systemd-networkd[1433]: cali19487381a85: Link UP Jul 7 02:58:31.031109 systemd-networkd[1433]: cali19487381a85: Gained carrier Jul 7 02:58:31.051473 systemd-networkd[1433]: cali1e03612df5f: Gained IPv6LL Jul 7 02:58:31.111776 kubelet[2669]: I0707 02:58:31.111599 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-kq75p" podStartSLOduration=46.111572528 podStartE2EDuration="46.111572528s" podCreationTimestamp="2025-07-07 02:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:58:31.10590357 +0000 UTC m=+52.684438217" watchObservedRunningTime="2025-07-07 02:58:31.111572528 +0000 UTC m=+52.690107168" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.255 [INFO][4450] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.332 [INFO][4450] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0 calico-apiserver-574fc86944- calico-apiserver c1e51e64-ed54-4337-b03c-5e29a286c65d 993 0 2025-07-07 02:57:56 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:574fc86944 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com calico-apiserver-574fc86944-2st6q eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali19487381a85 [] [] }} ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.332 [INFO][4450] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.558 [INFO][4485] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.561 [INFO][4485] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b4790), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"calico-apiserver-574fc86944-2st6q", "timestamp":"2025-07-07 02:58:30.558314642 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.567 [INFO][4485] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.735 [INFO][4485] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.735 [INFO][4485] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.805 [INFO][4485] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.827 [INFO][4485] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.876 [INFO][4485] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.885 [INFO][4485] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.902 [INFO][4485] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.902 [INFO][4485] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.915 [INFO][4485] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.937 [INFO][4485] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.986 [INFO][4485] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.133/26] block=192.168.59.128/26 handle="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.988 [INFO][4485] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.133/26] handle="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.988 [INFO][4485] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
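
The containerd entries above walk through one complete Calico IPAM cycle for calico-apiserver-574fc86944-2st6q: take the host-wide IPAM lock, confirm this node's affinity to the 192.168.59.128/26 block, claim 192.168.59.133/26 under a per-container handle, and release the lock. As a reading aid, the following self-contained Go sketch mirrors the ipam.AutoAssignArgs fields printed in that request; the AutoAssignArgs type here is a local illustrative copy populated with the logged values, not the real libcalico-go type.

package main

import "fmt"

// AutoAssignArgs mirrors the fields Calico logs for an IPAM request
// (ipam/ipam_plugin.go 265 above); this is a local illustrative copy,
// not the real libcalico-go type.
type AutoAssignArgs struct {
	Num4, Num6  int               // how many IPv4/IPv6 addresses to assign
	HandleID    string            // per-container handle used to release the address later
	Attrs       map[string]string // metadata recorded with the allocation, as seen in the log
	Hostname    string            // node that owns the block affinity
	IntendedUse string
}

func main() {
	// Values copied from the request logged for calico-apiserver-574fc86944-2st6q.
	req := AutoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: "k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "srv-3i0x6.gb1.brightbox.com",
			"pod":       "calico-apiserver-574fc86944-2st6q",
		},
		Hostname:    "srv-3i0x6.gb1.brightbox.com",
		IntendedUse: "Workload",
	}
	// The log shows the outcome: one IPv4 address from the node's affine block.
	fmt.Printf("request %+v -> assigned 192.168.59.133/26 from block 192.168.59.128/26\n", req)
}
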
Jul 7 02:58:31.124177 containerd[1504]: 2025-07-07 02:58:30.990 [INFO][4485] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.133/26] IPv6=[] ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.007 [INFO][4450] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1e51e64-ed54-4337-b03c-5e29a286c65d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-574fc86944-2st6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19487381a85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.007 [INFO][4450] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.133/32] ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.007 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19487381a85 ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.055 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.064 
[INFO][4450] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1e51e64-ed54-4337-b03c-5e29a286c65d", ResourceVersion:"993", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b", Pod:"calico-apiserver-574fc86944-2st6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19487381a85", MAC:"ce:a8:b4:5f:3b:bc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:31.126348 containerd[1504]: 2025-07-07 02:58:31.115 [INFO][4450] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Namespace="calico-apiserver" Pod="calico-apiserver-574fc86944-2st6q" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:31.190941 systemd[1]: Started cri-containerd-9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7.scope - libcontainer container 9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7. Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.930 [INFO][4511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.933 [INFO][4511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" iface="eth0" netns="/var/run/netns/cni-9afa9ac0-b429-5ca0-e33c-3a55e8a7eb5f" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.933 [INFO][4511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" iface="eth0" netns="/var/run/netns/cni-9afa9ac0-b429-5ca0-e33c-3a55e8a7eb5f" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.936 [INFO][4511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" iface="eth0" netns="/var/run/netns/cni-9afa9ac0-b429-5ca0-e33c-3a55e8a7eb5f" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.936 [INFO][4511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:30.936 [INFO][4511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.140 [INFO][4550] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.141 [INFO][4550] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.141 [INFO][4550] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.160 [WARNING][4550] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.160 [INFO][4550] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.164 [INFO][4550] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:31.195428 containerd[1504]: 2025-07-07 02:58:31.175 [INFO][4511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:31.199503 containerd[1504]: time="2025-07-07T02:58:31.199043105Z" level=info msg="TearDown network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" successfully" Jul 7 02:58:31.199503 containerd[1504]: time="2025-07-07T02:58:31.199081907Z" level=info msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" returns successfully" Jul 7 02:58:31.212214 systemd[1]: run-netns-cni\x2d9afa9ac0\x2db429\x2d5ca0\x2de33c\x2d3a55e8a7eb5f.mount: Deactivated successfully. 
Jul 7 02:58:31.217825 containerd[1504]: time="2025-07-07T02:58:31.217781278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d899c5f84-rp2zh,Uid:79b32a8d-dd25-4e48-a397-425ba7037f30,Namespace:calico-system,Attempt:1,}" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.960 [INFO][4520] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.960 [INFO][4520] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" iface="eth0" netns="/var/run/netns/cni-3cefb1a8-0073-1a00-c421-e7b23707ada9" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.964 [INFO][4520] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" iface="eth0" netns="/var/run/netns/cni-3cefb1a8-0073-1a00-c421-e7b23707ada9" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.967 [INFO][4520] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" iface="eth0" netns="/var/run/netns/cni-3cefb1a8-0073-1a00-c421-e7b23707ada9" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.967 [INFO][4520] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:30.967 [INFO][4520] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.218 [INFO][4560] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.218 [INFO][4560] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.218 [INFO][4560] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.243 [WARNING][4560] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.243 [INFO][4560] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.251 [INFO][4560] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:31.278175 containerd[1504]: 2025-07-07 02:58:31.263 [INFO][4520] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:31.279585 containerd[1504]: time="2025-07-07T02:58:31.278616807Z" level=info msg="TearDown network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" successfully" Jul 7 02:58:31.279585 containerd[1504]: time="2025-07-07T02:58:31.279332698Z" level=info msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" returns successfully" Jul 7 02:58:31.294280 containerd[1504]: time="2025-07-07T02:58:31.293965843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7967,Uid:74713bc6-d2e3-43dc-8c4c-5e5600fd418c,Namespace:kube-system,Attempt:1,}" Jul 7 02:58:31.306400 containerd[1504]: time="2025-07-07T02:58:31.304918305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:31.306400 containerd[1504]: time="2025-07-07T02:58:31.305016061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:31.306400 containerd[1504]: time="2025-07-07T02:58:31.305033648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:31.306400 containerd[1504]: time="2025-07-07T02:58:31.305151711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:31.393461 systemd[1]: Started cri-containerd-e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b.scope - libcontainer container e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b. Jul 7 02:58:31.582294 systemd[1]: run-netns-cni\x2d3cefb1a8\x2d0073\x2d1a00\x2dc421\x2de7b23707ada9.mount: Deactivated successfully. 
Jul 7 02:58:31.590386 containerd[1504]: time="2025-07-07T02:58:31.589625450Z" level=info msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" Jul 7 02:58:32.070581 systemd-networkd[1433]: calid97c0ab1dc8: Link UP Jul 7 02:58:32.073076 systemd-networkd[1433]: calid97c0ab1dc8: Gained carrier Jul 7 02:58:32.094216 containerd[1504]: time="2025-07-07T02:58:32.094059528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-976bg,Uid:8242f432-b915-4099-adee-f5f52e15717e,Namespace:calico-system,Attempt:1,} returns sandbox id \"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7\"" Jul 7 02:58:32.125527 systemd-networkd[1433]: caliac3cf4200d0: Link UP Jul 7 02:58:32.132521 systemd-networkd[1433]: caliac3cf4200d0: Gained carrier Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.466 [INFO][4607] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.511 [INFO][4607] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0 calico-kube-controllers-6d899c5f84- calico-system 79b32a8d-dd25-4e48-a397-425ba7037f30 1001 0 2025-07-07 02:58:02 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6d899c5f84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com calico-kube-controllers-6d899c5f84-rp2zh eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid97c0ab1dc8 [] [] }} ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.511 [INFO][4607] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.773 [INFO][4654] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" HandleID="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.774 [INFO][4654] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" HandleID="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000423e40), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"calico-kube-controllers-6d899c5f84-rp2zh", "timestamp":"2025-07-07 02:58:31.773301437 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.775 [INFO][4654] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.775 [INFO][4654] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.775 [INFO][4654] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.812 [INFO][4654] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.857 [INFO][4654] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.883 [INFO][4654] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.891 [INFO][4654] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.903 [INFO][4654] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.903 [INFO][4654] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.906 [INFO][4654] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467 Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.923 [INFO][4654] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.948 [INFO][4654] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.134/26] block=192.168.59.128/26 handle="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.948 [INFO][4654] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.134/26] handle="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.948 [INFO][4654] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
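
The same cycle repeats above for calico-kube-controllers-6d899c5f84-rp2zh, which receives 192.168.59.134/26 from the same affine block. When auditing which workload got which address, the "Successfully claimed IPs" messages are the easiest anchor; the sketch below is written against the exact message format in this journal (and assumes the text is piped in on stdin), so it may need adjusting for other Calico versions. Feeding it this journal text prints one line per claim.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// claimRe matches the "Successfully claimed IPs" messages shown in the log above,
// capturing the claimed address and the block it came from.
var claimRe = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\] block=([0-9./]+)`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines here are very long
	for sc.Scan() {
		if m := claimRe.FindStringSubmatch(sc.Text()); m != nil {
			fmt.Printf("claimed %s from block %s\n", m[1], m[2])
		}
	}
}
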
Jul 7 02:58:32.146148 containerd[1504]: 2025-07-07 02:58:31.948 [INFO][4654] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.134/26] IPv6=[] ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" HandleID="k8s-pod-network.10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.000 [INFO][4607] cni-plugin/k8s.go 418: Populated endpoint ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0", GenerateName:"calico-kube-controllers-6d899c5f84-", Namespace:"calico-system", SelfLink:"", UID:"79b32a8d-dd25-4e48-a397-425ba7037f30", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d899c5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-6d899c5f84-rp2zh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid97c0ab1dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.001 [INFO][4607] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.134/32] ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.001 [INFO][4607] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid97c0ab1dc8 ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.074 [INFO][4607] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" 
WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.078 [INFO][4607] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0", GenerateName:"calico-kube-controllers-6d899c5f84-", Namespace:"calico-system", SelfLink:"", UID:"79b32a8d-dd25-4e48-a397-425ba7037f30", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d899c5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467", Pod:"calico-kube-controllers-6d899c5f84-rp2zh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid97c0ab1dc8", MAC:"aa:87:76:6b:45:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.149208 containerd[1504]: 2025-07-07 02:58:32.129 [INFO][4607] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467" Namespace="calico-system" Pod="calico-kube-controllers-6d899c5f84-rp2zh" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.877 [INFO][4673] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.877 [INFO][4673] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" iface="eth0" netns="/var/run/netns/cni-7436f173-d967-16d3-bb62-e93e77287870" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.878 [INFO][4673] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" iface="eth0" netns="/var/run/netns/cni-7436f173-d967-16d3-bb62-e93e77287870" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.878 [INFO][4673] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" iface="eth0" netns="/var/run/netns/cni-7436f173-d967-16d3-bb62-e93e77287870" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.879 [INFO][4673] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:31.879 [INFO][4673] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.038 [INFO][4691] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.039 [INFO][4691] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.097 [INFO][4691] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.157 [WARNING][4691] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.157 [INFO][4691] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.171 [INFO][4691] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:32.197837 containerd[1504]: 2025-07-07 02:58:32.190 [INFO][4673] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:32.205025 containerd[1504]: time="2025-07-07T02:58:32.198388967Z" level=info msg="TearDown network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" successfully" Jul 7 02:58:32.205025 containerd[1504]: time="2025-07-07T02:58:32.198529086Z" level=info msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" returns successfully" Jul 7 02:58:32.213293 containerd[1504]: time="2025-07-07T02:58:32.212503639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-nldxn,Uid:f0c66970-d617-482e-81b9-577d7709b687,Namespace:calico-apiserver,Attempt:1,}" Jul 7 02:58:32.212935 systemd[1]: run-netns-cni\x2d7436f173\x2dd967\x2d16d3\x2dbb62\x2de93e77287870.mount: Deactivated successfully. 
Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.546 [INFO][4633] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.630 [INFO][4633] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0 coredns-7c65d6cfc9- kube-system 74713bc6-d2e3-43dc-8c4c-5e5600fd418c 1003 0 2025-07-07 02:57:45 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com coredns-7c65d6cfc9-g7967 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliac3cf4200d0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.630 [INFO][4633] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.853 [INFO][4676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" HandleID="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.854 [INFO][4676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" HandleID="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003578b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"coredns-7c65d6cfc9-g7967", "timestamp":"2025-07-07 02:58:31.853147916 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.854 [INFO][4676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.950 [INFO][4676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.950 [INFO][4676] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:31.992 [INFO][4676] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.012 [INFO][4676] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.032 [INFO][4676] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.035 [INFO][4676] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.042 [INFO][4676] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.043 [INFO][4676] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.050 [INFO][4676] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.066 [INFO][4676] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.097 [INFO][4676] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.135/26] block=192.168.59.128/26 handle="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.097 [INFO][4676] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.135/26] handle="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.097 [INFO][4676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
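
That was the third auto-assign in a row against 192.168.59.128/26: this node holds the affinity for that /26, so successive pods draw consecutive addresses from it (goldmane .132, the first apiserver .133, kube-controllers .134, coredns .135, and the second apiserver .136 a little further below). A quick way to sanity-check that all of them really fall inside the affine block, using only the Go standard library:

package main

import (
	"fmt"
	"net"
)

func main() {
	// The node's affine block, as logged in the "Trying affinity for ..." entries above.
	_, block, err := net.ParseCIDR("192.168.59.128/26")
	if err != nil {
		panic(err)
	}
	// Addresses handed out across this section of the log.
	for _, ip := range []string{"192.168.59.132", "192.168.59.133", "192.168.59.134", "192.168.59.135", "192.168.59.136"} {
		fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
	}
}
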
Jul 7 02:58:32.221767 containerd[1504]: 2025-07-07 02:58:32.097 [INFO][4676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.135/26] IPv6=[] ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" HandleID="k8s-pod-network.a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.103 [INFO][4633] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74713bc6-d2e3-43dc-8c4c-5e5600fd418c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"coredns-7c65d6cfc9-g7967", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac3cf4200d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.103 [INFO][4633] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.135/32] ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.104 [INFO][4633] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliac3cf4200d0 ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.141 [INFO][4633] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.142 [INFO][4633] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74713bc6-d2e3-43dc-8c4c-5e5600fd418c", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b", Pod:"coredns-7c65d6cfc9-g7967", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac3cf4200d0", MAC:"ea:2b:f1:f7:a7:5d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.226953 containerd[1504]: 2025-07-07 02:58:32.187 [INFO][4633] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b" Namespace="kube-system" Pod="coredns-7c65d6cfc9-g7967" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:32.292990 containerd[1504]: time="2025-07-07T02:58:32.292552440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-574fc86944-2st6q,Uid:c1e51e64-ed54-4337-b03c-5e29a286c65d,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\"" Jul 7 02:58:32.325105 containerd[1504]: time="2025-07-07T02:58:32.324306939Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:32.325105 containerd[1504]: time="2025-07-07T02:58:32.324405877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:32.325105 containerd[1504]: time="2025-07-07T02:58:32.324424735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:32.328855 containerd[1504]: time="2025-07-07T02:58:32.327042467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:32.435490 systemd[1]: Started cri-containerd-10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467.scope - libcontainer container 10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467. Jul 7 02:58:32.445260 containerd[1504]: time="2025-07-07T02:58:32.444559771Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:32.445397 containerd[1504]: time="2025-07-07T02:58:32.445207865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:32.445677 containerd[1504]: time="2025-07-07T02:58:32.445396305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:32.446420 containerd[1504]: time="2025-07-07T02:58:32.446340594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:32.522413 systemd-networkd[1433]: cali19487381a85: Gained IPv6LL Jul 7 02:58:32.534531 systemd[1]: Started cri-containerd-a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b.scope - libcontainer container a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b. 
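
One formatting quirk worth noting in the coredns endpoint dumps above: the workload endpoint ports are printed in Go hex notation, so Port:0x35 is 53 (the dns and dns-tcp ports) and Port:0x23c1 is 9153 (the metrics port). A one-liner to confirm the conversion:

package main

import "fmt"

func main() {
	// The workload endpoint dump above prints ports as hex: 0x35 and 0x23c1.
	fmt.Println(0x35, 0x23c1) // prints 53 9153, the coredns DNS and metrics ports
}
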
Jul 7 02:58:32.653333 systemd-networkd[1433]: cali7c028471174: Gained IPv6LL Jul 7 02:58:32.747845 containerd[1504]: time="2025-07-07T02:58:32.747673173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6d899c5f84-rp2zh,Uid:79b32a8d-dd25-4e48-a397-425ba7037f30,Namespace:calico-system,Attempt:1,} returns sandbox id \"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467\"" Jul 7 02:58:32.749122 containerd[1504]: time="2025-07-07T02:58:32.749068286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-g7967,Uid:74713bc6-d2e3-43dc-8c4c-5e5600fd418c,Namespace:kube-system,Attempt:1,} returns sandbox id \"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b\"" Jul 7 02:58:32.758170 containerd[1504]: time="2025-07-07T02:58:32.757941700Z" level=info msg="CreateContainer within sandbox \"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 02:58:32.791973 containerd[1504]: time="2025-07-07T02:58:32.791513867Z" level=info msg="CreateContainer within sandbox \"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"609367214c422c9fa9f3a61574cea1c4e82e6ec5253af04df3cb200b1b8013c2\"" Jul 7 02:58:32.794270 containerd[1504]: time="2025-07-07T02:58:32.793355448Z" level=info msg="StartContainer for \"609367214c422c9fa9f3a61574cea1c4e82e6ec5253af04df3cb200b1b8013c2\"" Jul 7 02:58:32.822459 systemd-networkd[1433]: cali2981b6817a9: Link UP Jul 7 02:58:32.823331 systemd-networkd[1433]: cali2981b6817a9: Gained carrier Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.410 [INFO][4754] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.478 [INFO][4754] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0 calico-apiserver-556c85fc95- calico-apiserver f0c66970-d617-482e-81b9-577d7709b687 1018 0 2025-07-07 02:57:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556c85fc95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com calico-apiserver-556c85fc95-nldxn eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali2981b6817a9 [] [] }} ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.478 [INFO][4754] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.655 [INFO][4817] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" 
HandleID="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.656 [INFO][4817] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" HandleID="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b3860), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"calico-apiserver-556c85fc95-nldxn", "timestamp":"2025-07-07 02:58:32.655331495 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.656 [INFO][4817] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.656 [INFO][4817] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.656 [INFO][4817] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.667 [INFO][4817] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.688 [INFO][4817] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.735 [INFO][4817] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.746 [INFO][4817] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.756 [INFO][4817] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.756 [INFO][4817] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.766 [INFO][4817] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.779 [INFO][4817] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.802 [INFO][4817] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.136/26] block=192.168.59.128/26 handle="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" host="srv-3i0x6.gb1.brightbox.com" Jul 7 
02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.802 [INFO][4817] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.136/26] handle="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.803 [INFO][4817] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:32.881104 containerd[1504]: 2025-07-07 02:58:32.803 [INFO][4817] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.136/26] IPv6=[] ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" HandleID="k8s-pod-network.e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 02:58:32.810 [INFO][4754] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0c66970-d617-482e-81b9-577d7709b687", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-556c85fc95-nldxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2981b6817a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 02:58:32.812 [INFO][4754] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.136/32] ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 02:58:32.812 [INFO][4754] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2981b6817a9 ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 
02:58:32.821 [INFO][4754] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 02:58:32.824 [INFO][4754] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0c66970-d617-482e-81b9-577d7709b687", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f", Pod:"calico-apiserver-556c85fc95-nldxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2981b6817a9", MAC:"36:02:0f:bd:a4:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:32.883714 containerd[1504]: 2025-07-07 02:58:32.870 [INFO][4754] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-nldxn" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:32.925591 systemd[1]: Started cri-containerd-609367214c422c9fa9f3a61574cea1c4e82e6ec5253af04df3cb200b1b8013c2.scope - libcontainer container 609367214c422c9fa9f3a61574cea1c4e82e6ec5253af04df3cb200b1b8013c2. Jul 7 02:58:33.028426 containerd[1504]: time="2025-07-07T02:58:33.027911504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:33.028426 containerd[1504]: time="2025-07-07T02:58:33.028077151Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:33.028426 containerd[1504]: time="2025-07-07T02:58:33.028112844Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:33.039429 containerd[1504]: time="2025-07-07T02:58:33.028320124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:33.053076 containerd[1504]: time="2025-07-07T02:58:33.052793119Z" level=info msg="StartContainer for \"609367214c422c9fa9f3a61574cea1c4e82e6ec5253af04df3cb200b1b8013c2\" returns successfully" Jul 7 02:58:33.123199 systemd[1]: Started cri-containerd-e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f.scope - libcontainer container e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f. Jul 7 02:58:33.149883 kubelet[2669]: I0707 02:58:33.147406 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-g7967" podStartSLOduration=48.147360585 podStartE2EDuration="48.147360585s" podCreationTimestamp="2025-07-07 02:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:58:33.146266778 +0000 UTC m=+54.724801424" watchObservedRunningTime="2025-07-07 02:58:33.147360585 +0000 UTC m=+54.725895216" Jul 7 02:58:33.226485 systemd-networkd[1433]: caliac3cf4200d0: Gained IPv6LL Jul 7 02:58:33.609291 kernel: bpftool[4943]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jul 7 02:58:33.676171 systemd-networkd[1433]: calid97c0ab1dc8: Gained IPv6LL Jul 7 02:58:33.714073 containerd[1504]: time="2025-07-07T02:58:33.713849443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-nldxn,Uid:f0c66970-d617-482e-81b9-577d7709b687,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f\"" Jul 7 02:58:33.995414 systemd-networkd[1433]: cali2981b6817a9: Gained IPv6LL Jul 7 02:58:34.481092 systemd-networkd[1433]: vxlan.calico: Link UP Jul 7 02:58:34.481114 systemd-networkd[1433]: vxlan.calico: Gained carrier Jul 7 02:58:35.294879 containerd[1504]: time="2025-07-07T02:58:35.294793863Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:35.297014 containerd[1504]: time="2025-07-07T02:58:35.296916365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977" Jul 7 02:58:35.298978 containerd[1504]: time="2025-07-07T02:58:35.298891595Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:35.303186 containerd[1504]: time="2025-07-07T02:58:35.303102405Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:35.304468 containerd[1504]: time="2025-07-07T02:58:35.304363212Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 7.029267473s" Jul 7 02:58:35.304670 containerd[1504]: 
time="2025-07-07T02:58:35.304442062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 02:58:35.309106 containerd[1504]: time="2025-07-07T02:58:35.308712501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\"" Jul 7 02:58:35.315714 containerd[1504]: time="2025-07-07T02:58:35.315665748Z" level=info msg="CreateContainer within sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:58:35.338184 containerd[1504]: time="2025-07-07T02:58:35.338132927Z" level=info msg="CreateContainer within sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\"" Jul 7 02:58:35.340115 containerd[1504]: time="2025-07-07T02:58:35.340078006Z" level=info msg="StartContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\"" Jul 7 02:58:35.490553 systemd[1]: Started cri-containerd-64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c.scope - libcontainer container 64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c. Jul 7 02:58:35.597630 containerd[1504]: time="2025-07-07T02:58:35.597487020Z" level=info msg="StartContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" returns successfully" Jul 7 02:58:36.180295 kubelet[2669]: I0707 02:58:36.180062 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-574fc86944-7dt46" podStartSLOduration=33.14478335 podStartE2EDuration="40.179438916s" podCreationTimestamp="2025-07-07 02:57:56 +0000 UTC" firstStartedPulling="2025-07-07 02:58:28.27371611 +0000 UTC m=+49.852250737" lastFinishedPulling="2025-07-07 02:58:35.308371665 +0000 UTC m=+56.886906303" observedRunningTime="2025-07-07 02:58:36.179069362 +0000 UTC m=+57.757604001" watchObservedRunningTime="2025-07-07 02:58:36.179438916 +0000 UTC m=+57.757973541" Jul 7 02:58:36.234638 systemd-networkd[1433]: vxlan.calico: Gained IPv6LL Jul 7 02:58:37.206096 containerd[1504]: time="2025-07-07T02:58:37.205191737Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:37.211927 containerd[1504]: time="2025-07-07T02:58:37.210995059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207" Jul 7 02:58:37.214609 containerd[1504]: time="2025-07-07T02:58:37.214493911Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:37.218768 containerd[1504]: time="2025-07-07T02:58:37.218641448Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:37.222484 containerd[1504]: time="2025-07-07T02:58:37.221685586Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest 
\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.912925104s" Jul 7 02:58:37.222767 containerd[1504]: time="2025-07-07T02:58:37.222733669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\"" Jul 7 02:58:37.225629 containerd[1504]: time="2025-07-07T02:58:37.225343174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 02:58:37.227586 containerd[1504]: time="2025-07-07T02:58:37.227370765Z" level=info msg="CreateContainer within sandbox \"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Jul 7 02:58:37.250390 containerd[1504]: time="2025-07-07T02:58:37.250331846Z" level=info msg="CreateContainer within sandbox \"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"aa1678b5bb3902920a918aab41665e818532822bdda105f2ce4b418724d8db4d\"" Jul 7 02:58:37.252391 containerd[1504]: time="2025-07-07T02:58:37.252218499Z" level=info msg="StartContainer for \"aa1678b5bb3902920a918aab41665e818532822bdda105f2ce4b418724d8db4d\"" Jul 7 02:58:37.337519 systemd[1]: Started cri-containerd-aa1678b5bb3902920a918aab41665e818532822bdda105f2ce4b418724d8db4d.scope - libcontainer container aa1678b5bb3902920a918aab41665e818532822bdda105f2ce4b418724d8db4d. Jul 7 02:58:37.407857 containerd[1504]: time="2025-07-07T02:58:37.407780509Z" level=info msg="StartContainer for \"aa1678b5bb3902920a918aab41665e818532822bdda105f2ce4b418724d8db4d\" returns successfully" Jul 7 02:58:38.770637 containerd[1504]: time="2025-07-07T02:58:38.768618767Z" level=info msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:38.967 [WARNING][5140] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09fc3d5-40d3-4a9f-af15-68963bda8021", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d", Pod:"calico-apiserver-574fc86944-7dt46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4bc10b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:38.969 [INFO][5140] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:38.969 [INFO][5140] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" iface="eth0" netns="" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:38.969 [INFO][5140] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:38.969 [INFO][5140] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.043 [INFO][5147] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.043 [INFO][5147] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.044 [INFO][5147] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.062 [WARNING][5147] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.062 [INFO][5147] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.066 [INFO][5147] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:39.075374 containerd[1504]: 2025-07-07 02:58:39.071 [INFO][5140] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.075374 containerd[1504]: time="2025-07-07T02:58:39.074069888Z" level=info msg="TearDown network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" successfully" Jul 7 02:58:39.075374 containerd[1504]: time="2025-07-07T02:58:39.074121240Z" level=info msg="StopPodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" returns successfully" Jul 7 02:58:39.165060 containerd[1504]: time="2025-07-07T02:58:39.164964407Z" level=info msg="RemovePodSandbox for \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" Jul 7 02:58:39.165707 containerd[1504]: time="2025-07-07T02:58:39.165083522Z" level=info msg="Forcibly stopping sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\"" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.386 [WARNING][5176] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c09fc3d5-40d3-4a9f-af15-68963bda8021", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d", Pod:"calico-apiserver-574fc86944-7dt46", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali14d4bc10b6d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.388 [INFO][5176] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.388 [INFO][5176] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" iface="eth0" netns="" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.388 [INFO][5176] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.388 [INFO][5176] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.526 [INFO][5193] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.526 [INFO][5193] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.526 [INFO][5193] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.541 [WARNING][5193] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.541 [INFO][5193] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" HandleID="k8s-pod-network.d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.547 [INFO][5193] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:39.562125 containerd[1504]: 2025-07-07 02:58:39.558 [INFO][5176] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02" Jul 7 02:58:39.564211 containerd[1504]: time="2025-07-07T02:58:39.562844675Z" level=info msg="TearDown network for sandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" successfully" Jul 7 02:58:39.604123 containerd[1504]: time="2025-07-07T02:58:39.602206324Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:58:39.616593 containerd[1504]: time="2025-07-07T02:58:39.603511817Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:39.626528 containerd[1504]: time="2025-07-07T02:58:39.626341366Z" level=info msg="RemovePodSandbox \"d737c4dccf1b9829da1751db0500080ff86d1e5de14083d7a023459c1305ee02\" returns successfully" Jul 7 02:58:39.629220 containerd[1504]: time="2025-07-07T02:58:39.628828508Z" level=info msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.822 [WARNING][5217] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74713bc6-d2e3-43dc-8c4c-5e5600fd418c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b", Pod:"coredns-7c65d6cfc9-g7967", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac3cf4200d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.824 [INFO][5217] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.824 [INFO][5217] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" iface="eth0" netns="" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.825 [INFO][5217] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.825 [INFO][5217] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.962 [INFO][5229] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.963 [INFO][5229] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.964 [INFO][5229] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.979 [WARNING][5229] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.979 [INFO][5229] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.984 [INFO][5229] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:40.006161 containerd[1504]: 2025-07-07 02:58:39.990 [INFO][5217] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.006161 containerd[1504]: time="2025-07-07T02:58:40.005947520Z" level=info msg="TearDown network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" successfully" Jul 7 02:58:40.006161 containerd[1504]: time="2025-07-07T02:58:40.006006790Z" level=info msg="StopPodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" returns successfully" Jul 7 02:58:40.012044 containerd[1504]: time="2025-07-07T02:58:40.011322326Z" level=info msg="RemovePodSandbox for \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" Jul 7 02:58:40.012044 containerd[1504]: time="2025-07-07T02:58:40.011369519Z" level=info msg="Forcibly stopping sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\"" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.867 [INFO][5216] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.872 [INFO][5216] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" iface="eth0" netns="/var/run/netns/cni-8b92ede2-6cf6-3444-4d48-33950ffbbb2c" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.873 [INFO][5216] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" iface="eth0" netns="/var/run/netns/cni-8b92ede2-6cf6-3444-4d48-33950ffbbb2c" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.874 [INFO][5216] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" iface="eth0" netns="/var/run/netns/cni-8b92ede2-6cf6-3444-4d48-33950ffbbb2c" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.874 [INFO][5216] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:39.875 [INFO][5216] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.004 [INFO][5234] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.006 [INFO][5234] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.006 [INFO][5234] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.024 [WARNING][5234] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.024 [INFO][5234] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.031 [INFO][5234] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:40.043061 containerd[1504]: 2025-07-07 02:58:40.039 [INFO][5216] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:58:40.049272 containerd[1504]: time="2025-07-07T02:58:40.048333174Z" level=info msg="TearDown network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" successfully" Jul 7 02:58:40.049272 containerd[1504]: time="2025-07-07T02:58:40.048516852Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" returns successfully" Jul 7 02:58:40.051675 systemd[1]: run-netns-cni\x2d8b92ede2\x2d6cf6\x2d3444\x2d4d48\x2d33950ffbbb2c.mount: Deactivated successfully. Jul 7 02:58:40.061338 containerd[1504]: time="2025-07-07T02:58:40.060220078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dq6fd,Uid:524e68d3-c271-42bd-a0b6-ec9248f8255b,Namespace:calico-system,Attempt:1,}" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.208 [WARNING][5251] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"74713bc6-d2e3-43dc-8c4c-5e5600fd418c", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"a112aabe4c05d5f7035100caba9336a5b9654b1b809a60d28c4e770709ea018b", Pod:"coredns-7c65d6cfc9-g7967", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliac3cf4200d0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.208 [INFO][5251] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.208 [INFO][5251] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" iface="eth0" netns="" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.208 [INFO][5251] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.208 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.348 [INFO][5272] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.350 [INFO][5272] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.350 [INFO][5272] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.370 [WARNING][5272] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.370 [INFO][5272] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" HandleID="k8s-pod-network.e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--g7967-eth0" Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.379 [INFO][5272] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:40.389745 containerd[1504]: 2025-07-07 02:58:40.385 [INFO][5251] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0" Jul 7 02:58:40.389745 containerd[1504]: time="2025-07-07T02:58:40.389689161Z" level=info msg="TearDown network for sandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" successfully" Jul 7 02:58:40.457384 containerd[1504]: time="2025-07-07T02:58:40.457328904Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:40.460360 containerd[1504]: time="2025-07-07T02:58:40.460327258Z" level=info msg="RemovePodSandbox \"e7f82dac02d1a6a26a35f5ed1ac8ca9e9ab2b37cf637eacffc865c5d1a61c2a0\" returns successfully" Jul 7 02:58:40.461425 containerd[1504]: time="2025-07-07T02:58:40.461376598Z" level=info msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" Jul 7 02:58:40.638914 systemd-networkd[1433]: cali4e2a6f22139: Link UP Jul 7 02:58:40.644094 systemd-networkd[1433]: cali4e2a6f22139: Gained carrier Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.359 [INFO][5257] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0 csi-node-driver- calico-system 524e68d3-c271-42bd-a0b6-ec9248f8255b 1081 0 2025-07-07 02:58:01 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com csi-node-driver-dq6fd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4e2a6f22139 [] [] }} ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.359 [INFO][5257] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" 
WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.475 [INFO][5283] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" HandleID="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.477 [INFO][5283] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" HandleID="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000374040), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"csi-node-driver-dq6fd", "timestamp":"2025-07-07 02:58:40.473226116 +0000 UTC"}, Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.478 [INFO][5283] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.479 [INFO][5283] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.479 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.504 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.534 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.556 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.563 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.572 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.572 [INFO][5283] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.578 [INFO][5283] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.594 [INFO][5283] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 
containerd[1504]: 2025-07-07 02:58:40.612 [INFO][5283] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.137/26] block=192.168.59.128/26 handle="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.614 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.137/26] handle="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.614 [INFO][5283] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:40.691500 containerd[1504]: 2025-07-07 02:58:40.614 [INFO][5283] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.137/26] IPv6=[] ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" HandleID="k8s-pod-network.d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.621 [INFO][5257] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"524e68d3-c271-42bd-a0b6-ec9248f8255b", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-dq6fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e2a6f22139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.623 [INFO][5257] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.137/32] ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.623 [INFO][5257] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4e2a6f22139 ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" 
Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.648 [INFO][5257] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.651 [INFO][5257] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"524e68d3-c271-42bd-a0b6-ec9248f8255b", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e", Pod:"csi-node-driver-dq6fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e2a6f22139", MAC:"82:ba:ac:80:52:f4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:40.694623 containerd[1504]: 2025-07-07 02:58:40.672 [INFO][5257] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e" Namespace="calico-system" Pod="csi-node-driver-dq6fd" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:58:40.805964 containerd[1504]: time="2025-07-07T02:58:40.805079117Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:58:40.805964 containerd[1504]: time="2025-07-07T02:58:40.805174008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:58:40.805964 containerd[1504]: time="2025-07-07T02:58:40.805194496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:40.805964 containerd[1504]: time="2025-07-07T02:58:40.805479932Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.726 [WARNING][5302] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0", GenerateName:"calico-kube-controllers-6d899c5f84-", Namespace:"calico-system", SelfLink:"", UID:"79b32a8d-dd25-4e48-a397-425ba7037f30", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d899c5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467", Pod:"calico-kube-controllers-6d899c5f84-rp2zh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid97c0ab1dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.728 [INFO][5302] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.728 [INFO][5302] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" iface="eth0" netns="" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.729 [INFO][5302] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.729 [INFO][5302] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.794 [INFO][5323] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.795 [INFO][5323] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.795 [INFO][5323] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.811 [WARNING][5323] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.811 [INFO][5323] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.820 [INFO][5323] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:40.828208 containerd[1504]: 2025-07-07 02:58:40.824 [INFO][5302] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:40.828208 containerd[1504]: time="2025-07-07T02:58:40.827856255Z" level=info msg="TearDown network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" successfully" Jul 7 02:58:40.828208 containerd[1504]: time="2025-07-07T02:58:40.827899991Z" level=info msg="StopPodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" returns successfully" Jul 7 02:58:40.829779 containerd[1504]: time="2025-07-07T02:58:40.828727560Z" level=info msg="RemovePodSandbox for \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" Jul 7 02:58:40.829779 containerd[1504]: time="2025-07-07T02:58:40.828764280Z" level=info msg="Forcibly stopping sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\"" Jul 7 02:58:40.864150 systemd[1]: Started cri-containerd-d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e.scope - libcontainer container d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e. 
Jul 7 02:58:40.962276 containerd[1504]: time="2025-07-07T02:58:40.962205254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dq6fd,Uid:524e68d3-c271-42bd-a0b6-ec9248f8255b,Namespace:calico-system,Attempt:1,} returns sandbox id \"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e\"" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:40.959 [WARNING][5363] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0", GenerateName:"calico-kube-controllers-6d899c5f84-", Namespace:"calico-system", SelfLink:"", UID:"79b32a8d-dd25-4e48-a397-425ba7037f30", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6d899c5f84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467", Pod:"calico-kube-controllers-6d899c5f84-rp2zh", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid97c0ab1dc8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:40.959 [INFO][5363] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:40.959 [INFO][5363] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" iface="eth0" netns="" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:40.959 [INFO][5363] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:40.959 [INFO][5363] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.017 [INFO][5383] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.018 [INFO][5383] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.018 [INFO][5383] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.033 [WARNING][5383] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.033 [INFO][5383] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" HandleID="k8s-pod-network.c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--kube--controllers--6d899c5f84--rp2zh-eth0" Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.037 [INFO][5383] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.051885 containerd[1504]: 2025-07-07 02:58:41.043 [INFO][5363] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc" Jul 7 02:58:41.053443 containerd[1504]: time="2025-07-07T02:58:41.052753472Z" level=info msg="TearDown network for sandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" successfully" Jul 7 02:58:41.057665 containerd[1504]: time="2025-07-07T02:58:41.057519786Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:41.057665 containerd[1504]: time="2025-07-07T02:58:41.057589424Z" level=info msg="RemovePodSandbox \"c27c068f7f9cc3f4b6309a9e4d370ce7b753200b3081dacf79f209b106b138cc\" returns successfully" Jul 7 02:58:41.058747 containerd[1504]: time="2025-07-07T02:58:41.058344749Z" level=info msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.121 [WARNING][5397] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80330edf-c28a-48e6-926d-3066a67afb9f", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0", Pod:"coredns-7c65d6cfc9-kq75p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e03612df5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.122 [INFO][5397] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.122 [INFO][5397] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" iface="eth0" netns="" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.122 [INFO][5397] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.122 [INFO][5397] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.161 [INFO][5404] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.161 [INFO][5404] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.161 [INFO][5404] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.172 [WARNING][5404] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.172 [INFO][5404] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.175 [INFO][5404] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.181051 containerd[1504]: 2025-07-07 02:58:41.178 [INFO][5397] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.182638 containerd[1504]: time="2025-07-07T02:58:41.181150681Z" level=info msg="TearDown network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" successfully" Jul 7 02:58:41.182638 containerd[1504]: time="2025-07-07T02:58:41.181188386Z" level=info msg="StopPodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" returns successfully" Jul 7 02:58:41.182638 containerd[1504]: time="2025-07-07T02:58:41.181926027Z" level=info msg="RemovePodSandbox for \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" Jul 7 02:58:41.182638 containerd[1504]: time="2025-07-07T02:58:41.181961230Z" level=info msg="Forcibly stopping sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\"" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.260 [WARNING][5419] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"80330edf-c28a-48e6-926d-3066a67afb9f", ResourceVersion:"1010", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"19bc955bf68d4f8f97f7fd6163364ee21ed1e7313971127f5105c82fed7a86c0", Pod:"coredns-7c65d6cfc9-kq75p", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1e03612df5f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.262 [INFO][5419] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.262 [INFO][5419] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" iface="eth0" netns="" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.262 [INFO][5419] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.262 [INFO][5419] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.319 [INFO][5427] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.320 [INFO][5427] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.320 [INFO][5427] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.342 [WARNING][5427] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.342 [INFO][5427] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" HandleID="k8s-pod-network.9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-coredns--7c65d6cfc9--kq75p-eth0" Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.344 [INFO][5427] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.351974 containerd[1504]: 2025-07-07 02:58:41.348 [INFO][5419] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f" Jul 7 02:58:41.351974 containerd[1504]: time="2025-07-07T02:58:41.351943426Z" level=info msg="TearDown network for sandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" successfully" Jul 7 02:58:41.362083 containerd[1504]: time="2025-07-07T02:58:41.361696729Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:41.362083 containerd[1504]: time="2025-07-07T02:58:41.361788065Z" level=info msg="RemovePodSandbox \"9a21688ac64a882da4b7787cbe4993c3d0772c1132374b9e5aa9811149e68b3f\" returns successfully" Jul 7 02:58:41.387388 containerd[1504]: time="2025-07-07T02:58:41.387262325Z" level=info msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.466 [WARNING][5441] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8242f432-b915-4099-adee-f5f52e15717e", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7", Pod:"goldmane-58fd7646b9-976bg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c028471174", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.466 [INFO][5441] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.466 [INFO][5441] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" iface="eth0" netns="" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.467 [INFO][5441] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.467 [INFO][5441] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.514 [INFO][5448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.515 [INFO][5448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.515 [INFO][5448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.526 [WARNING][5448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.527 [INFO][5448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.532 [INFO][5448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.540042 containerd[1504]: 2025-07-07 02:58:41.535 [INFO][5441] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.540042 containerd[1504]: time="2025-07-07T02:58:41.539639291Z" level=info msg="TearDown network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" successfully" Jul 7 02:58:41.540042 containerd[1504]: time="2025-07-07T02:58:41.539687435Z" level=info msg="StopPodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" returns successfully" Jul 7 02:58:41.543485 containerd[1504]: time="2025-07-07T02:58:41.541712383Z" level=info msg="RemovePodSandbox for \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" Jul 7 02:58:41.543485 containerd[1504]: time="2025-07-07T02:58:41.541753296Z" level=info msg="Forcibly stopping sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\"" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.643 [WARNING][5463] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0", GenerateName:"goldmane-58fd7646b9-", Namespace:"calico-system", SelfLink:"", UID:"8242f432-b915-4099-adee-f5f52e15717e", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"58fd7646b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7", Pod:"goldmane-58fd7646b9-976bg", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali7c028471174", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.643 [INFO][5463] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.643 [INFO][5463] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" iface="eth0" netns="" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.644 [INFO][5463] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.644 [INFO][5463] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.695 [INFO][5470] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.696 [INFO][5470] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.696 [INFO][5470] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.710 [WARNING][5470] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.710 [INFO][5470] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" HandleID="k8s-pod-network.3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Workload="srv--3i0x6.gb1.brightbox.com-k8s-goldmane--58fd7646b9--976bg-eth0" Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.713 [INFO][5470] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.724089 containerd[1504]: 2025-07-07 02:58:41.719 [INFO][5463] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf" Jul 7 02:58:41.726188 containerd[1504]: time="2025-07-07T02:58:41.724091958Z" level=info msg="TearDown network for sandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" successfully" Jul 7 02:58:41.730348 containerd[1504]: time="2025-07-07T02:58:41.730112458Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:41.730348 containerd[1504]: time="2025-07-07T02:58:41.730196193Z" level=info msg="RemovePodSandbox \"3fd84f2a087827be9bd8aab771392bf74bee7469b981d59b37be76e73e15a8bf\" returns successfully" Jul 7 02:58:41.731222 containerd[1504]: time="2025-07-07T02:58:41.731188320Z" level=info msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.805 [WARNING][5484] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0c66970-d617-482e-81b9-577d7709b687", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f", Pod:"calico-apiserver-556c85fc95-nldxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2981b6817a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.807 [INFO][5484] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.807 [INFO][5484] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" iface="eth0" netns="" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.807 [INFO][5484] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.807 [INFO][5484] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.858 [INFO][5491] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.859 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.859 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.880 [WARNING][5491] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.880 [INFO][5491] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.883 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:41.888425 containerd[1504]: 2025-07-07 02:58:41.886 [INFO][5484] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:41.889584 containerd[1504]: time="2025-07-07T02:58:41.888817702Z" level=info msg="TearDown network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" successfully" Jul 7 02:58:41.889584 containerd[1504]: time="2025-07-07T02:58:41.888852323Z" level=info msg="StopPodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" returns successfully" Jul 7 02:58:41.890516 containerd[1504]: time="2025-07-07T02:58:41.890115461Z" level=info msg="RemovePodSandbox for \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" Jul 7 02:58:41.890516 containerd[1504]: time="2025-07-07T02:58:41.890159652Z" level=info msg="Forcibly stopping sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\"" Jul 7 02:58:41.994516 systemd-networkd[1433]: cali4e2a6f22139: Gained IPv6LL Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:41.981 [WARNING][5507] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"f0c66970-d617-482e-81b9-577d7709b687", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f", Pod:"calico-apiserver-556c85fc95-nldxn", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali2981b6817a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:41.981 [INFO][5507] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:41.982 [INFO][5507] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" iface="eth0" netns="" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:41.982 [INFO][5507] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:41.982 [INFO][5507] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.050 [INFO][5514] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.050 [INFO][5514] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.050 [INFO][5514] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.062 [WARNING][5514] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.062 [INFO][5514] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" HandleID="k8s-pod-network.0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--nldxn-eth0" Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.064 [INFO][5514] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:42.071323 containerd[1504]: 2025-07-07 02:58:42.067 [INFO][5507] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc" Jul 7 02:58:42.071323 containerd[1504]: time="2025-07-07T02:58:42.070965638Z" level=info msg="TearDown network for sandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" successfully" Jul 7 02:58:42.077714 containerd[1504]: time="2025-07-07T02:58:42.077675612Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:42.077923 containerd[1504]: time="2025-07-07T02:58:42.077891098Z" level=info msg="RemovePodSandbox \"0fe2ed312aaa7310b8e99b3588250b4c4e4b06bdd76f995fdb38ca9e4b218cbc\" returns successfully" Jul 7 02:58:42.078744 containerd[1504]: time="2025-07-07T02:58:42.078709248Z" level=info msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" Jul 7 02:58:42.190951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2856209356.mount: Deactivated successfully. Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.202 [WARNING][5528] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.202 [INFO][5528] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.203 [INFO][5528] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" iface="eth0" netns="" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.203 [INFO][5528] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.203 [INFO][5528] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.295 [INFO][5535] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.296 [INFO][5535] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.297 [INFO][5535] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.317 [WARNING][5535] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.317 [INFO][5535] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.321 [INFO][5535] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:42.327161 containerd[1504]: 2025-07-07 02:58:42.323 [INFO][5528] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.327161 containerd[1504]: time="2025-07-07T02:58:42.326970808Z" level=info msg="TearDown network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" successfully" Jul 7 02:58:42.327161 containerd[1504]: time="2025-07-07T02:58:42.327013493Z" level=info msg="StopPodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" returns successfully" Jul 7 02:58:42.331186 containerd[1504]: time="2025-07-07T02:58:42.330325192Z" level=info msg="RemovePodSandbox for \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" Jul 7 02:58:42.331186 containerd[1504]: time="2025-07-07T02:58:42.330368351Z" level=info msg="Forcibly stopping sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\"" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.467 [WARNING][5554] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.467 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.467 [INFO][5554] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" iface="eth0" netns="" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.467 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.468 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.552 [INFO][5561] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.553 [INFO][5561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.554 [INFO][5561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.575 [WARNING][5561] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.575 [INFO][5561] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" HandleID="k8s-pod-network.fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-whisker--65d9798d4c--j97sf-eth0" Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.580 [INFO][5561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:42.593436 containerd[1504]: 2025-07-07 02:58:42.587 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e" Jul 7 02:58:42.593436 containerd[1504]: time="2025-07-07T02:58:42.593400518Z" level=info msg="TearDown network for sandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" successfully" Jul 7 02:58:42.629631 containerd[1504]: time="2025-07-07T02:58:42.629552214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:42.631506 containerd[1504]: time="2025-07-07T02:58:42.629651199Z" level=info msg="RemovePodSandbox \"fe2e501c88d5327bd5523fc8ad4eb85f4cf1b327f9f3c09bebdf017954dad97e\" returns successfully" Jul 7 02:58:42.631506 containerd[1504]: time="2025-07-07T02:58:42.630787759Z" level=info msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.849 [WARNING][5575] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1e51e64-ed54-4337-b03c-5e29a286c65d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b", Pod:"calico-apiserver-574fc86944-2st6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19487381a85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.850 [INFO][5575] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.851 [INFO][5575] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" iface="eth0" netns="" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.851 [INFO][5575] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.852 [INFO][5575] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.982 [INFO][5582] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.983 [INFO][5582] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:42.983 [INFO][5582] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:43.012 [WARNING][5582] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:43.012 [INFO][5582] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:43.023 [INFO][5582] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:43.032262 containerd[1504]: 2025-07-07 02:58:43.029 [INFO][5575] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.032262 containerd[1504]: time="2025-07-07T02:58:43.031757262Z" level=info msg="TearDown network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" successfully" Jul 7 02:58:43.032262 containerd[1504]: time="2025-07-07T02:58:43.031794130Z" level=info msg="StopPodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" returns successfully" Jul 7 02:58:43.035735 containerd[1504]: time="2025-07-07T02:58:43.033157088Z" level=info msg="RemovePodSandbox for \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" Jul 7 02:58:43.035735 containerd[1504]: time="2025-07-07T02:58:43.033192437Z" level=info msg="Forcibly stopping sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\"" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.170 [WARNING][5596] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0", GenerateName:"calico-apiserver-574fc86944-", Namespace:"calico-apiserver", SelfLink:"", UID:"c1e51e64-ed54-4337-b03c-5e29a286c65d", ResourceVersion:"1007", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 57, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"574fc86944", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b", Pod:"calico-apiserver-574fc86944-2st6q", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali19487381a85", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.174 [INFO][5596] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.174 [INFO][5596] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" iface="eth0" netns="" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.174 [INFO][5596] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.174 [INFO][5596] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.275 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.275 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.275 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.289 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.289 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" HandleID="k8s-pod-network.7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.292 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:58:43.302553 containerd[1504]: 2025-07-07 02:58:43.298 [INFO][5596] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f" Jul 7 02:58:43.302553 containerd[1504]: time="2025-07-07T02:58:43.301526288Z" level=info msg="TearDown network for sandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" successfully" Jul 7 02:58:43.345946 containerd[1504]: time="2025-07-07T02:58:43.345870008Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:58:43.346144 containerd[1504]: time="2025-07-07T02:58:43.345973901Z" level=info msg="RemovePodSandbox \"7a73577975eaeefdd271ece761d78958c63bb8926058bb1fdda694889d2e0e4f\" returns successfully" Jul 7 02:58:44.328427 containerd[1504]: time="2025-07-07T02:58:44.327005361Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:44.333513 containerd[1504]: time="2025-07-07T02:58:44.330644179Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308" Jul 7 02:58:44.343623 containerd[1504]: time="2025-07-07T02:58:44.343296962Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:44.385294 containerd[1504]: time="2025-07-07T02:58:44.353418982Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:44.385294 containerd[1504]: time="2025-07-07T02:58:44.354998746Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 7.12961277s" Jul 7 02:58:44.385294 containerd[1504]: time="2025-07-07T02:58:44.384454525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\"" Jul 7 02:58:44.427543 containerd[1504]: time="2025-07-07T02:58:44.426538864Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 
02:58:44.608530 containerd[1504]: time="2025-07-07T02:58:44.607911767Z" level=info msg="CreateContainer within sandbox \"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Jul 7 02:58:44.854828 containerd[1504]: time="2025-07-07T02:58:44.853753449Z" level=info msg="CreateContainer within sandbox \"9f1b8093b631a8b3e08d57129bea26b56d4868169be7549c89fed5436c3f6ff7\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1\"" Jul 7 02:58:44.921444 containerd[1504]: time="2025-07-07T02:58:44.921309548Z" level=info msg="StartContainer for \"fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1\"" Jul 7 02:58:44.968877 containerd[1504]: time="2025-07-07T02:58:44.968773817Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:44.972301 containerd[1504]: time="2025-07-07T02:58:44.971439153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 02:58:45.014010 containerd[1504]: time="2025-07-07T02:58:45.013934933Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 587.331137ms" Jul 7 02:58:45.015660 containerd[1504]: time="2025-07-07T02:58:45.014024732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 02:58:45.033862 containerd[1504]: time="2025-07-07T02:58:45.030571069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Jul 7 02:58:45.038260 containerd[1504]: time="2025-07-07T02:58:45.037047717Z" level=info msg="CreateContainer within sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:58:45.078007 containerd[1504]: time="2025-07-07T02:58:45.074035758Z" level=info msg="CreateContainer within sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\"" Jul 7 02:58:45.079278 containerd[1504]: time="2025-07-07T02:58:45.078415442Z" level=info msg="StartContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\"" Jul 7 02:58:45.224818 systemd[1]: Started cri-containerd-35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32.scope - libcontainer container 35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32. Jul 7 02:58:45.265484 systemd[1]: Started cri-containerd-fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1.scope - libcontainer container fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1. 
Jul 7 02:58:45.748923 containerd[1504]: time="2025-07-07T02:58:45.748443560Z" level=info msg="StartContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" returns successfully" Jul 7 02:58:45.875752 containerd[1504]: time="2025-07-07T02:58:45.874376950Z" level=info msg="StartContainer for \"fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1\" returns successfully" Jul 7 02:58:47.137746 kubelet[2669]: I0707 02:58:47.134268 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-58fd7646b9-976bg" podStartSLOduration=33.805541669 podStartE2EDuration="46.114585783s" podCreationTimestamp="2025-07-07 02:58:01 +0000 UTC" firstStartedPulling="2025-07-07 02:58:32.100509455 +0000 UTC m=+53.679044082" lastFinishedPulling="2025-07-07 02:58:44.409553559 +0000 UTC m=+65.988088196" observedRunningTime="2025-07-07 02:58:47.037696665 +0000 UTC m=+68.616231305" watchObservedRunningTime="2025-07-07 02:58:47.114585783 +0000 UTC m=+68.693120410" Jul 7 02:58:51.721432 kubelet[2669]: I0707 02:58:51.701483 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-574fc86944-2st6q" podStartSLOduration=42.951311486 podStartE2EDuration="55.672267763s" podCreationTimestamp="2025-07-07 02:57:56 +0000 UTC" firstStartedPulling="2025-07-07 02:58:32.308650416 +0000 UTC m=+53.887185038" lastFinishedPulling="2025-07-07 02:58:45.029606669 +0000 UTC m=+66.608141315" observedRunningTime="2025-07-07 02:58:47.13736755 +0000 UTC m=+68.715902176" watchObservedRunningTime="2025-07-07 02:58:51.672267763 +0000 UTC m=+73.250802404" Jul 7 02:58:51.981366 containerd[1504]: time="2025-07-07T02:58:51.979803959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:51.990804 containerd[1504]: time="2025-07-07T02:58:51.987973369Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:51.994139 containerd[1504]: time="2025-07-07T02:58:51.993187392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:52.003381 containerd[1504]: time="2025-07-07T02:58:51.995338405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Jul 7 02:58:52.003381 containerd[1504]: time="2025-07-07T02:58:51.996111910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 6.96536998s" Jul 7 02:58:52.003381 containerd[1504]: time="2025-07-07T02:58:52.003246087Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\"" Jul 7 02:58:52.038732 containerd[1504]: time="2025-07-07T02:58:52.038289178Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\"" Jul 7 02:58:52.161526 containerd[1504]: 
time="2025-07-07T02:58:52.161460701Z" level=info msg="CreateContainer within sandbox \"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jul 7 02:58:52.225753 containerd[1504]: time="2025-07-07T02:58:52.225557201Z" level=info msg="CreateContainer within sandbox \"10136a320aecd09395425463a80b334fc3a20faa98c29e642b96d7b403f4f467\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529\"" Jul 7 02:58:52.230991 containerd[1504]: time="2025-07-07T02:58:52.230522604Z" level=info msg="StartContainer for \"4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529\"" Jul 7 02:58:52.531771 systemd[1]: Started cri-containerd-4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529.scope - libcontainer container 4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529. Jul 7 02:58:52.548069 containerd[1504]: time="2025-07-07T02:58:52.547991797Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:58:52.551428 containerd[1504]: time="2025-07-07T02:58:52.551383608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77" Jul 7 02:58:52.559357 containerd[1504]: time="2025-07-07T02:58:52.559305492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 520.951838ms" Jul 7 02:58:52.559543 containerd[1504]: time="2025-07-07T02:58:52.559513163Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\"" Jul 7 02:58:52.566561 containerd[1504]: time="2025-07-07T02:58:52.566522792Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\"" Jul 7 02:58:52.588802 containerd[1504]: time="2025-07-07T02:58:52.587974395Z" level=info msg="CreateContainer within sandbox \"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:58:52.726712 containerd[1504]: time="2025-07-07T02:58:52.726654546Z" level=info msg="CreateContainer within sandbox \"e38e6983a7e0763aea3db8cc36f7ffcee3dca3ad9e6b6d57bfa6957b69b4aa7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"35853ef17c7cda766753f79bdb590dc353287b589806c52e77366a050e499906\"" Jul 7 02:58:52.799263 containerd[1504]: time="2025-07-07T02:58:52.797043695Z" level=info msg="StartContainer for \"35853ef17c7cda766753f79bdb590dc353287b589806c52e77366a050e499906\"" Jul 7 02:58:52.897481 systemd[1]: Started cri-containerd-35853ef17c7cda766753f79bdb590dc353287b589806c52e77366a050e499906.scope - libcontainer container 35853ef17c7cda766753f79bdb590dc353287b589806c52e77366a050e499906. 
Jul 7 02:58:53.095317 containerd[1504]: time="2025-07-07T02:58:53.095164932Z" level=info msg="StartContainer for \"4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529\" returns successfully" Jul 7 02:58:53.225940 containerd[1504]: time="2025-07-07T02:58:53.225888924Z" level=info msg="StartContainer for \"35853ef17c7cda766753f79bdb590dc353287b589806c52e77366a050e499906\" returns successfully" Jul 7 02:58:54.272787 kubelet[2669]: I0707 02:58:54.272559 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-556c85fc95-nldxn" podStartSLOduration=38.429920227 podStartE2EDuration="57.272532065s" podCreationTimestamp="2025-07-07 02:57:57 +0000 UTC" firstStartedPulling="2025-07-07 02:58:33.72037654 +0000 UTC m=+55.298911160" lastFinishedPulling="2025-07-07 02:58:52.562988364 +0000 UTC m=+74.141522998" observedRunningTime="2025-07-07 02:58:54.270678858 +0000 UTC m=+75.849213500" watchObservedRunningTime="2025-07-07 02:58:54.272532065 +0000 UTC m=+75.851066698" Jul 7 02:58:54.300388 kubelet[2669]: I0707 02:58:54.299669 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6d899c5f84-rp2zh" podStartSLOduration=33.03890704 podStartE2EDuration="52.299641609s" podCreationTimestamp="2025-07-07 02:58:02 +0000 UTC" firstStartedPulling="2025-07-07 02:58:32.751595431 +0000 UTC m=+54.330130058" lastFinishedPulling="2025-07-07 02:58:52.012329994 +0000 UTC m=+73.590864627" observedRunningTime="2025-07-07 02:58:54.299280647 +0000 UTC m=+75.877815295" watchObservedRunningTime="2025-07-07 02:58:54.299641609 +0000 UTC m=+75.878176243" Jul 7 02:58:58.160007 kubelet[2669]: I0707 02:58:58.159626 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh6w2\" (UniqueName: \"kubernetes.io/projected/0d7078d7-1915-4227-a66a-0a08e54e2daa-kube-api-access-fh6w2\") pod \"calico-apiserver-556c85fc95-lbckp\" (UID: \"0d7078d7-1915-4227-a66a-0a08e54e2daa\") " pod="calico-apiserver/calico-apiserver-556c85fc95-lbckp" Jul 7 02:58:58.160007 kubelet[2669]: I0707 02:58:58.159802 2669 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0d7078d7-1915-4227-a66a-0a08e54e2daa-calico-apiserver-certs\") pod \"calico-apiserver-556c85fc95-lbckp\" (UID: \"0d7078d7-1915-4227-a66a-0a08e54e2daa\") " pod="calico-apiserver/calico-apiserver-556c85fc95-lbckp" Jul 7 02:58:58.181476 containerd[1504]: time="2025-07-07T02:58:58.181389651Z" level=info msg="StopContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" with timeout 30 (s)" Jul 7 02:58:58.193320 containerd[1504]: time="2025-07-07T02:58:58.193273855Z" level=info msg="Stop container \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" with signal terminated" Jul 7 02:58:58.246529 systemd[1]: Created slice kubepods-besteffort-pod0d7078d7_1915_4227_a66a_0a08e54e2daa.slice - libcontainer container kubepods-besteffort-pod0d7078d7_1915_4227_a66a_0a08e54e2daa.slice. Jul 7 02:58:58.314566 systemd[1]: cri-containerd-35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32.scope: Deactivated successfully. Jul 7 02:58:58.347102 systemd[1]: cri-containerd-35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32.scope: Consumed 1.267s CPU time. 
Jul 7 02:58:58.635985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32-rootfs.mount: Deactivated successfully. Jul 7 02:58:58.823443 containerd[1504]: time="2025-07-07T02:58:58.701622446Z" level=info msg="shim disconnected" id=35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32 namespace=k8s.io Jul 7 02:58:58.836347 containerd[1504]: time="2025-07-07T02:58:58.744954578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-lbckp,Uid:0d7078d7-1915-4227-a66a-0a08e54e2daa,Namespace:calico-apiserver,Attempt:0,}" Jul 7 02:58:58.865023 containerd[1504]: time="2025-07-07T02:58:58.864962912Z" level=warning msg="cleaning up after shim disconnected" id=35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32 namespace=k8s.io Jul 7 02:58:58.866695 containerd[1504]: time="2025-07-07T02:58:58.866654827Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:58:59.353193 containerd[1504]: time="2025-07-07T02:58:59.350532674Z" level=info msg="StopContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" returns successfully" Jul 7 02:58:59.366440 containerd[1504]: time="2025-07-07T02:58:59.366016133Z" level=info msg="StopPodSandbox for \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\"" Jul 7 02:58:59.371785 containerd[1504]: time="2025-07-07T02:58:59.371710440Z" level=info msg="Container to stop \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 02:58:59.382990 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b-shm.mount: Deactivated successfully. Jul 7 02:58:59.468470 systemd[1]: cri-containerd-e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b.scope: Deactivated successfully. Jul 7 02:58:59.608678 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b-rootfs.mount: Deactivated successfully. 
Jul 7 02:58:59.633259 containerd[1504]: time="2025-07-07T02:58:59.631554052Z" level=info msg="shim disconnected" id=e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b namespace=k8s.io Jul 7 02:58:59.633259 containerd[1504]: time="2025-07-07T02:58:59.631647114Z" level=warning msg="cleaning up after shim disconnected" id=e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b namespace=k8s.io Jul 7 02:58:59.633259 containerd[1504]: time="2025-07-07T02:58:59.631673476Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:59:00.214978 systemd-networkd[1433]: cali19487381a85: Link DOWN Jul 7 02:59:00.215003 systemd-networkd[1433]: cali19487381a85: Lost carrier Jul 7 02:59:00.413311 kubelet[2669]: I0707 02:59:00.373724 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:00.796136 systemd-networkd[1433]: cali1a99eff8eca: Link UP Jul 7 02:59:00.803894 systemd-networkd[1433]: cali1a99eff8eca: Gained carrier Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:58:59.819 [INFO][5919] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0 calico-apiserver-556c85fc95- calico-apiserver 0d7078d7-1915-4227-a66a-0a08e54e2daa 1194 0 2025-07-07 02:58:57 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:556c85fc95 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-3i0x6.gb1.brightbox.com calico-apiserver-556c85fc95-lbckp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1a99eff8eca [] [] }} ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:58:59.827 [INFO][5919] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.428 [INFO][5958] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" HandleID="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.434 [INFO][5958] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" HandleID="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fc30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"srv-3i0x6.gb1.brightbox.com", "pod":"calico-apiserver-556c85fc95-lbckp", "timestamp":"2025-07-07 02:59:00.428299004 +0000 UTC"}, 
Hostname:"srv-3i0x6.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.434 [INFO][5958] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.436 [INFO][5958] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.437 [INFO][5958] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-3i0x6.gb1.brightbox.com' Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.523 [INFO][5958] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.568 [INFO][5958] ipam/ipam.go 394: Looking up existing affinities for host host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.611 [INFO][5958] ipam/ipam.go 511: Trying affinity for 192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.626 [INFO][5958] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.637 [INFO][5958] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.128/26 host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.637 [INFO][5958] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.128/26 handle="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.642 [INFO][5958] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100 Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.655 [INFO][5958] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.128/26 handle="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.706 [INFO][5958] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.138/26] block=192.168.59.128/26 handle="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.733 [INFO][5958] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.138/26] handle="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" host="srv-3i0x6.gb1.brightbox.com" Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.733 [INFO][5958] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 02:59:00.975693 containerd[1504]: 2025-07-07 02:59:00.733 [INFO][5958] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.138/26] IPv6=[] ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" HandleID="k8s-pod-network.1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.741 [INFO][5919] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d7078d7-1915-4227-a66a-0a08e54e2daa", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-556c85fc95-lbckp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a99eff8eca", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.742 [INFO][5919] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.138/32] ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.742 [INFO][5919] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a99eff8eca ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.832 [INFO][5919] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.845 
[INFO][5919] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0", GenerateName:"calico-apiserver-556c85fc95-", Namespace:"calico-apiserver", SelfLink:"", UID:"0d7078d7-1915-4227-a66a-0a08e54e2daa", ResourceVersion:"1194", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"556c85fc95", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100", Pod:"calico-apiserver-556c85fc95-lbckp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.138/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1a99eff8eca", MAC:"ce:7e:15:db:8a:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:59:00.983318 containerd[1504]: 2025-07-07 02:59:00.958 [INFO][5919] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100" Namespace="calico-apiserver" Pod="calico-apiserver-556c85fc95-lbckp" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--556c85fc95--lbckp-eth0" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.187 [INFO][5979] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.190 [INFO][5979] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" iface="eth0" netns="/var/run/netns/cni-d04bbc5b-d3d8-2699-567c-a6c65a31233b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.193 [INFO][5979] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" iface="eth0" netns="/var/run/netns/cni-d04bbc5b-d3d8-2699-567c-a6c65a31233b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.213 [INFO][5979] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" after=22.532465ms iface="eth0" netns="/var/run/netns/cni-d04bbc5b-d3d8-2699-567c-a6c65a31233b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.213 [INFO][5979] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.213 [INFO][5979] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.497 [INFO][5989] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.502 [INFO][5989] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:00.733 [INFO][5989] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:01.203 [INFO][5989] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:01.203 [INFO][5989] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:01.220 [INFO][5989] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:01.269481 containerd[1504]: 2025-07-07 02:59:01.255 [INFO][5979] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:01.271248 containerd[1504]: time="2025-07-07T02:59:01.270646401Z" level=info msg="TearDown network for sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" successfully" Jul 7 02:59:01.271248 containerd[1504]: time="2025-07-07T02:59:01.270688871Z" level=info msg="StopPodSandbox for \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" returns successfully" Jul 7 02:59:01.286147 systemd[1]: run-netns-cni\x2dd04bbc5b\x2dd3d8\x2d2699\x2d567c\x2da6c65a31233b.mount: Deactivated successfully. Jul 7 02:59:01.489506 containerd[1504]: time="2025-07-07T02:59:01.483288205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 7 02:59:01.499802 containerd[1504]: time="2025-07-07T02:59:01.491303377Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 7 02:59:01.499802 containerd[1504]: time="2025-07-07T02:59:01.498364043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:59:01.499802 containerd[1504]: time="2025-07-07T02:59:01.498693366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 7 02:59:01.699509 systemd[1]: Started cri-containerd-1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100.scope - libcontainer container 1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100. Jul 7 02:59:01.783186 kubelet[2669]: I0707 02:59:01.782419 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hr9t\" (UniqueName: \"kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t\") pod \"c1e51e64-ed54-4337-b03c-5e29a286c65d\" (UID: \"c1e51e64-ed54-4337-b03c-5e29a286c65d\") " Jul 7 02:59:01.783186 kubelet[2669]: I0707 02:59:01.782979 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1e51e64-ed54-4337-b03c-5e29a286c65d-calico-apiserver-certs\") pod \"c1e51e64-ed54-4337-b03c-5e29a286c65d\" (UID: \"c1e51e64-ed54-4337-b03c-5e29a286c65d\") " Jul 7 02:59:01.881557 systemd[1]: var-lib-kubelet-pods-c1e51e64\x2ded54\x2d4337\x2db03c\x2d5e29a286c65d-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 02:59:01.895534 systemd[1]: var-lib-kubelet-pods-c1e51e64\x2ded54\x2d4337\x2db03c\x2d5e29a286c65d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5hr9t.mount: Deactivated successfully. Jul 7 02:59:01.908642 kubelet[2669]: I0707 02:59:01.906981 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e51e64-ed54-4337-b03c-5e29a286c65d-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c1e51e64-ed54-4337-b03c-5e29a286c65d" (UID: "c1e51e64-ed54-4337-b03c-5e29a286c65d"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 02:59:01.908642 kubelet[2669]: I0707 02:59:01.908378 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t" (OuterVolumeSpecName: "kube-api-access-5hr9t") pod "c1e51e64-ed54-4337-b03c-5e29a286c65d" (UID: "c1e51e64-ed54-4337-b03c-5e29a286c65d"). InnerVolumeSpecName "kube-api-access-5hr9t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 02:59:02.010793 kubelet[2669]: I0707 02:59:02.008634 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5hr9t\" (UniqueName: \"kubernetes.io/projected/c1e51e64-ed54-4337-b03c-5e29a286c65d-kube-api-access-5hr9t\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:59:02.013424 kubelet[2669]: I0707 02:59:02.013172 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c1e51e64-ed54-4337-b03c-5e29a286c65d-calico-apiserver-certs\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:59:02.284438 systemd-networkd[1433]: cali1a99eff8eca: Gained IPv6LL Jul 7 02:59:02.365178 containerd[1504]: time="2025-07-07T02:59:02.363895030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-556c85fc95-lbckp,Uid:0d7078d7-1915-4227-a66a-0a08e54e2daa,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100\"" Jul 7 02:59:02.662681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4244029313.mount: Deactivated successfully. Jul 7 02:59:02.789899 systemd[1]: Removed slice kubepods-besteffort-podc1e51e64_ed54_4337_b03c_5e29a286c65d.slice - libcontainer container kubepods-besteffort-podc1e51e64_ed54_4337_b03c_5e29a286c65d.slice. Jul 7 02:59:02.790048 systemd[1]: kubepods-besteffort-podc1e51e64_ed54_4337_b03c_5e29a286c65d.slice: Consumed 1.317s CPU time. Jul 7 02:59:02.822547 containerd[1504]: time="2025-07-07T02:59:02.822483103Z" level=info msg="CreateContainer within sandbox \"1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jul 7 02:59:02.898406 containerd[1504]: time="2025-07-07T02:59:02.898319050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477" Jul 7 02:59:02.970179 containerd[1504]: time="2025-07-07T02:59:02.969410599Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 10.399118447s" Jul 7 02:59:02.975431 containerd[1504]: time="2025-07-07T02:59:02.975160949Z" level=info msg="CreateContainer within sandbox \"1ca460d875bad657f7197ae6baeb7ca6c4ec6a7f115993f1a32b9d7f19c08100\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"852cc2ac0f3e6211526b91d36745c6a5c4d523ed4b389dd0b07bbcd10b0ae8ef\"" Jul 7 02:59:02.978369 containerd[1504]: time="2025-07-07T02:59:02.977282006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\"" Jul 7 02:59:02.978923 containerd[1504]: time="2025-07-07T02:59:02.978891315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:02.980298 containerd[1504]: time="2025-07-07T02:59:02.980263675Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 
7 02:59:02.981499 containerd[1504]: time="2025-07-07T02:59:02.981462139Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:03.001123 containerd[1504]: time="2025-07-07T02:59:03.001062767Z" level=info msg="StartContainer for \"852cc2ac0f3e6211526b91d36745c6a5c4d523ed4b389dd0b07bbcd10b0ae8ef\"" Jul 7 02:59:03.003347 containerd[1504]: time="2025-07-07T02:59:03.002869075Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Jul 7 02:59:03.011034 containerd[1504]: time="2025-07-07T02:59:03.010740516Z" level=info msg="CreateContainer within sandbox \"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Jul 7 02:59:03.066308 containerd[1504]: time="2025-07-07T02:59:03.065528027Z" level=info msg="CreateContainer within sandbox \"315b6dc5c31ce521c16ad18a7f19ac1fb96f1147fae9fb3be1a025b32590d987\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"6b67caa79782094738d83c62e619765daa72e087724d662430dce55956f773b3\"" Jul 7 02:59:03.071395 containerd[1504]: time="2025-07-07T02:59:03.071339908Z" level=info msg="StartContainer for \"6b67caa79782094738d83c62e619765daa72e087724d662430dce55956f773b3\"" Jul 7 02:59:03.207456 systemd[1]: Started cri-containerd-6b67caa79782094738d83c62e619765daa72e087724d662430dce55956f773b3.scope - libcontainer container 6b67caa79782094738d83c62e619765daa72e087724d662430dce55956f773b3. Jul 7 02:59:03.216544 systemd[1]: Started cri-containerd-852cc2ac0f3e6211526b91d36745c6a5c4d523ed4b389dd0b07bbcd10b0ae8ef.scope - libcontainer container 852cc2ac0f3e6211526b91d36745c6a5c4d523ed4b389dd0b07bbcd10b0ae8ef. 
Jul 7 02:59:03.540542 containerd[1504]: time="2025-07-07T02:59:03.540490579Z" level=info msg="StartContainer for \"852cc2ac0f3e6211526b91d36745c6a5c4d523ed4b389dd0b07bbcd10b0ae8ef\" returns successfully" Jul 7 02:59:03.569361 containerd[1504]: time="2025-07-07T02:59:03.569309812Z" level=info msg="StartContainer for \"6b67caa79782094738d83c62e619765daa72e087724d662430dce55956f773b3\" returns successfully" Jul 7 02:59:03.754444 kubelet[2669]: I0707 02:59:03.698084 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-668b845577-6qbdj" podStartSLOduration=2.467764845 podStartE2EDuration="35.692705121s" podCreationTimestamp="2025-07-07 02:58:28 +0000 UTC" firstStartedPulling="2025-07-07 02:58:29.769634581 +0000 UTC m=+51.348169212" lastFinishedPulling="2025-07-07 02:59:02.994574855 +0000 UTC m=+84.573109488" observedRunningTime="2025-07-07 02:59:03.69234406 +0000 UTC m=+85.270878698" watchObservedRunningTime="2025-07-07 02:59:03.692705121 +0000 UTC m=+85.271239755" Jul 7 02:59:03.757147 kubelet[2669]: I0707 02:59:03.757058 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-556c85fc95-lbckp" podStartSLOduration=6.757034106 podStartE2EDuration="6.757034106s" podCreationTimestamp="2025-07-07 02:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 02:59:03.7562978 +0000 UTC m=+85.334832441" watchObservedRunningTime="2025-07-07 02:59:03.757034106 +0000 UTC m=+85.335568740" Jul 7 02:59:04.595896 kubelet[2669]: I0707 02:59:04.589646 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e51e64-ed54-4337-b03c-5e29a286c65d" path="/var/lib/kubelet/pods/c1e51e64-ed54-4337-b03c-5e29a286c65d/volumes" Jul 7 02:59:05.496589 containerd[1504]: time="2025-07-07T02:59:05.496526885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:05.501568 containerd[1504]: time="2025-07-07T02:59:05.499164920Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Jul 7 02:59:05.531259 containerd[1504]: time="2025-07-07T02:59:05.528739518Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 2.52523504s" Jul 7 02:59:05.531259 containerd[1504]: time="2025-07-07T02:59:05.528798345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Jul 7 02:59:05.558252 containerd[1504]: time="2025-07-07T02:59:05.556680443Z" level=info msg="CreateContainer within sandbox \"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jul 7 02:59:05.580061 containerd[1504]: time="2025-07-07T02:59:05.579993246Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:05.583915 containerd[1504]: time="2025-07-07T02:59:05.583880486Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:05.608322 containerd[1504]: time="2025-07-07T02:59:05.608227599Z" level=info msg="CreateContainer within sandbox \"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"02eecee12e9d695eccbf3df9a0cded902ac6e295e2fe93609cc46b1ae1bbd3e9\"" Jul 7 02:59:05.609856 containerd[1504]: time="2025-07-07T02:59:05.609572048Z" level=info msg="StartContainer for \"02eecee12e9d695eccbf3df9a0cded902ac6e295e2fe93609cc46b1ae1bbd3e9\"" Jul 7 02:59:05.802492 systemd[1]: Started cri-containerd-02eecee12e9d695eccbf3df9a0cded902ac6e295e2fe93609cc46b1ae1bbd3e9.scope - libcontainer container 02eecee12e9d695eccbf3df9a0cded902ac6e295e2fe93609cc46b1ae1bbd3e9. Jul 7 02:59:05.972577 containerd[1504]: time="2025-07-07T02:59:05.972497291Z" level=info msg="StartContainer for \"02eecee12e9d695eccbf3df9a0cded902ac6e295e2fe93609cc46b1ae1bbd3e9\" returns successfully" Jul 7 02:59:05.991719 containerd[1504]: time="2025-07-07T02:59:05.991667154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Jul 7 02:59:08.429612 containerd[1504]: time="2025-07-07T02:59:08.429487219Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:08.432844 containerd[1504]: time="2025-07-07T02:59:08.432535315Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 02:59:08.433155 containerd[1504]: time="2025-07-07T02:59:08.433102499Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:08.436264 containerd[1504]: time="2025-07-07T02:59:08.435761512Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 02:59:08.438214 containerd[1504]: time="2025-07-07T02:59:08.438159622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.446405819s" Jul 7 02:59:08.438489 containerd[1504]: time="2025-07-07T02:59:08.438429829Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 02:59:08.450736 containerd[1504]: time="2025-07-07T02:59:08.450596635Z" level=info msg="CreateContainer within sandbox \"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 02:59:08.490372 containerd[1504]: time="2025-07-07T02:59:08.482209544Z" level=info msg="CreateContainer within sandbox \"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} 
returns container id \"852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d\"" Jul 7 02:59:08.490372 containerd[1504]: time="2025-07-07T02:59:08.485204965Z" level=info msg="StartContainer for \"852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d\"" Jul 7 02:59:08.665666 systemd[1]: run-containerd-runc-k8s.io-852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d-runc.5y5o7q.mount: Deactivated successfully. Jul 7 02:59:08.678750 systemd[1]: Started cri-containerd-852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d.scope - libcontainer container 852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d. Jul 7 02:59:08.824705 containerd[1504]: time="2025-07-07T02:59:08.824560514Z" level=info msg="StartContainer for \"852d7ba66131185b43adef61a81d6dec0d84159236b85bc8d7332310708ce08d\" returns successfully" Jul 7 02:59:09.225777 containerd[1504]: time="2025-07-07T02:59:09.225712261Z" level=info msg="StopContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" with timeout 30 (s)" Jul 7 02:59:09.228013 containerd[1504]: time="2025-07-07T02:59:09.227974979Z" level=info msg="Stop container \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" with signal terminated" Jul 7 02:59:09.500354 systemd[1]: cri-containerd-64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c.scope: Deactivated successfully. Jul 7 02:59:09.500754 systemd[1]: cri-containerd-64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c.scope: Consumed 1.651s CPU time. Jul 7 02:59:09.631878 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c-rootfs.mount: Deactivated successfully. Jul 7 02:59:09.678794 containerd[1504]: time="2025-07-07T02:59:09.622270926Z" level=info msg="shim disconnected" id=64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c namespace=k8s.io Jul 7 02:59:09.679568 containerd[1504]: time="2025-07-07T02:59:09.679520077Z" level=warning msg="cleaning up after shim disconnected" id=64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c namespace=k8s.io Jul 7 02:59:09.679700 containerd[1504]: time="2025-07-07T02:59:09.679672500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:59:09.841447 containerd[1504]: time="2025-07-07T02:59:09.840885690Z" level=info msg="StopContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" returns successfully" Jul 7 02:59:09.857328 containerd[1504]: time="2025-07-07T02:59:09.857271743Z" level=info msg="StopPodSandbox for \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\"" Jul 7 02:59:09.883169 containerd[1504]: time="2025-07-07T02:59:09.882947318Z" level=info msg="Container to stop \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 7 02:59:09.897819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d-shm.mount: Deactivated successfully. Jul 7 02:59:09.918647 systemd[1]: cri-containerd-1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d.scope: Deactivated successfully. 
Jul 7 02:59:09.973207 containerd[1504]: time="2025-07-07T02:59:09.972186018Z" level=info msg="shim disconnected" id=1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d namespace=k8s.io Jul 7 02:59:09.973207 containerd[1504]: time="2025-07-07T02:59:09.972299917Z" level=warning msg="cleaning up after shim disconnected" id=1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d namespace=k8s.io Jul 7 02:59:09.973207 containerd[1504]: time="2025-07-07T02:59:09.972316349Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 7 02:59:09.977094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d-rootfs.mount: Deactivated successfully. Jul 7 02:59:10.318548 kubelet[2669]: I0707 02:59:10.302284 2669 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 02:59:10.318548 kubelet[2669]: I0707 02:59:10.315910 2669 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 02:59:10.731016 systemd-networkd[1433]: cali14d4bc10b6d: Link DOWN Jul 7 02:59:10.731030 systemd-networkd[1433]: cali14d4bc10b6d: Lost carrier Jul 7 02:59:11.004821 kubelet[2669]: I0707 02:59:10.889072 2669 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dq6fd" podStartSLOduration=42.215879124 podStartE2EDuration="1m9.672567303s" podCreationTimestamp="2025-07-07 02:58:01 +0000 UTC" firstStartedPulling="2025-07-07 02:58:40.986183323 +0000 UTC m=+62.564717950" lastFinishedPulling="2025-07-07 02:59:08.442871509 +0000 UTC m=+90.021406129" observedRunningTime="2025-07-07 02:59:10.032883101 +0000 UTC m=+91.611417740" watchObservedRunningTime="2025-07-07 02:59:10.672567303 +0000 UTC m=+92.251101932" Jul 7 02:59:11.186547 kubelet[2669]: I0707 02:59:11.186210 2669 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.670 [INFO][6317] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.672 [INFO][6317] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" iface="eth0" netns="/var/run/netns/cni-68930fc3-fd68-11bc-8f0a-21f284cf23ba" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.674 [INFO][6317] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" iface="eth0" netns="/var/run/netns/cni-68930fc3-fd68-11bc-8f0a-21f284cf23ba" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.695 [INFO][6317] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" after=22.173741ms iface="eth0" netns="/var/run/netns/cni-68930fc3-fd68-11bc-8f0a-21f284cf23ba" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.695 [INFO][6317] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:10.697 [INFO][6317] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:11.390 [INFO][6330] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:11.394 [INFO][6330] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:11.397 [INFO][6330] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:12.356 [INFO][6330] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:12.356 [INFO][6330] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:12.538 [INFO][6330] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:12.583282 containerd[1504]: 2025-07-07 02:59:12.556 [INFO][6317] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:12.738530 containerd[1504]: time="2025-07-07T02:59:12.736832865Z" level=info msg="TearDown network for sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" successfully" Jul 7 02:59:12.738530 containerd[1504]: time="2025-07-07T02:59:12.736888751Z" level=info msg="StopPodSandbox for \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" returns successfully" Jul 7 02:59:12.762609 systemd[1]: run-netns-cni\x2d68930fc3\x2dfd68\x2d11bc\x2d8f0a\x2d21f284cf23ba.mount: Deactivated successfully. 
Jul 7 02:59:13.138296 kubelet[2669]: I0707 02:59:13.137712 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c09fc3d5-40d3-4a9f-af15-68963bda8021-calico-apiserver-certs\") pod \"c09fc3d5-40d3-4a9f-af15-68963bda8021\" (UID: \"c09fc3d5-40d3-4a9f-af15-68963bda8021\") " Jul 7 02:59:13.138296 kubelet[2669]: I0707 02:59:13.137854 2669 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-965j8\" (UniqueName: \"kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8\") pod \"c09fc3d5-40d3-4a9f-af15-68963bda8021\" (UID: \"c09fc3d5-40d3-4a9f-af15-68963bda8021\") " Jul 7 02:59:13.219930 systemd[1]: var-lib-kubelet-pods-c09fc3d5\x2d40d3\x2d4a9f\x2daf15\x2d68963bda8021-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Jul 7 02:59:13.228556 systemd[1]: var-lib-kubelet-pods-c09fc3d5\x2d40d3\x2d4a9f\x2daf15\x2d68963bda8021-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d965j8.mount: Deactivated successfully. Jul 7 02:59:13.235591 kubelet[2669]: I0707 02:59:13.233933 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8" (OuterVolumeSpecName: "kube-api-access-965j8") pod "c09fc3d5-40d3-4a9f-af15-68963bda8021" (UID: "c09fc3d5-40d3-4a9f-af15-68963bda8021"). InnerVolumeSpecName "kube-api-access-965j8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 7 02:59:13.236021 kubelet[2669]: I0707 02:59:13.235543 2669 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c09fc3d5-40d3-4a9f-af15-68963bda8021-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "c09fc3d5-40d3-4a9f-af15-68963bda8021" (UID: "c09fc3d5-40d3-4a9f-af15-68963bda8021"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 7 02:59:13.238656 kubelet[2669]: I0707 02:59:13.238627 2669 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c09fc3d5-40d3-4a9f-af15-68963bda8021-calico-apiserver-certs\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:59:13.238828 kubelet[2669]: I0707 02:59:13.238804 2669 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-965j8\" (UniqueName: \"kubernetes.io/projected/c09fc3d5-40d3-4a9f-af15-68963bda8021-kube-api-access-965j8\") on node \"srv-3i0x6.gb1.brightbox.com\" DevicePath \"\"" Jul 7 02:59:13.277764 systemd[1]: Removed slice kubepods-besteffort-podc09fc3d5_40d3_4a9f_af15_68963bda8021.slice - libcontainer container kubepods-besteffort-podc09fc3d5_40d3_4a9f_af15_68963bda8021.slice. Jul 7 02:59:13.277936 systemd[1]: kubepods-besteffort-podc09fc3d5_40d3_4a9f_af15_68963bda8021.slice: Consumed 1.693s CPU time. Jul 7 02:59:13.563811 systemd[1]: Started sshd@9-10.244.11.130:22-139.178.68.195:38354.service - OpenSSH per-connection server daemon (139.178.68.195:38354). Jul 7 02:59:14.566734 sshd[6355]: Accepted publickey for core from 139.178.68.195 port 38354 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:14.570597 sshd[6355]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:14.609900 systemd-logind[1484]: New session 12 of user core. 
Jul 7 02:59:14.616912 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 7 02:59:14.628456 kubelet[2669]: I0707 02:59:14.628390 2669 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c09fc3d5-40d3-4a9f-af15-68963bda8021" path="/var/lib/kubelet/pods/c09fc3d5-40d3-4a9f-af15-68963bda8021/volumes" Jul 7 02:59:15.206345 systemd[1]: run-containerd-runc-k8s.io-4cad2f229dfd73516c6a4f16879086c7e0ea2e99800d66f42b5bc2c743043529-runc.2nRU0A.mount: Deactivated successfully. Jul 7 02:59:16.158348 sshd[6355]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:16.180519 systemd[1]: sshd@9-10.244.11.130:22-139.178.68.195:38354.service: Deactivated successfully. Jul 7 02:59:16.187013 systemd[1]: session-12.scope: Deactivated successfully. Jul 7 02:59:16.194954 systemd-logind[1484]: Session 12 logged out. Waiting for processes to exit. Jul 7 02:59:16.199999 systemd-logind[1484]: Removed session 12. Jul 7 02:59:21.339021 systemd[1]: Started sshd@10-10.244.11.130:22-139.178.68.195:36520.service - OpenSSH per-connection server daemon (139.178.68.195:36520). Jul 7 02:59:22.377519 systemd[1]: run-containerd-runc-k8s.io-fa1b58b020356ba027dd4f1d619fb108f0846693d28dad587540330ee204e1e1-runc.J9sci0.mount: Deactivated successfully. Jul 7 02:59:22.384030 sshd[6421]: Accepted publickey for core from 139.178.68.195 port 36520 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:22.386017 sshd[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:22.431358 systemd-logind[1484]: New session 13 of user core. Jul 7 02:59:22.437542 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 7 02:59:23.821335 sshd[6421]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:23.832112 systemd[1]: sshd@10-10.244.11.130:22-139.178.68.195:36520.service: Deactivated successfully. Jul 7 02:59:23.837323 systemd[1]: session-13.scope: Deactivated successfully. Jul 7 02:59:23.840476 systemd-logind[1484]: Session 13 logged out. Waiting for processes to exit. Jul 7 02:59:23.848042 systemd-logind[1484]: Removed session 13. Jul 7 02:59:28.990685 systemd[1]: Started sshd@11-10.244.11.130:22-139.178.68.195:41932.service - OpenSSH per-connection server daemon (139.178.68.195:41932). Jul 7 02:59:29.991300 sshd[6457]: Accepted publickey for core from 139.178.68.195 port 41932 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:29.995733 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:30.008972 systemd-logind[1484]: New session 14 of user core. Jul 7 02:59:30.017788 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 02:59:31.019647 sshd[6457]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:31.026133 systemd[1]: sshd@11-10.244.11.130:22-139.178.68.195:41932.service: Deactivated successfully. Jul 7 02:59:31.031922 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 02:59:31.036365 systemd-logind[1484]: Session 14 logged out. Waiting for processes to exit. Jul 7 02:59:31.038361 systemd-logind[1484]: Removed session 14. Jul 7 02:59:31.180743 systemd[1]: Started sshd@12-10.244.11.130:22-139.178.68.195:41946.service - OpenSSH per-connection server daemon (139.178.68.195:41946). 
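Editor's note: each SSH session in this log brackets its lifetime between a pam_unix "session opened" and "session closed" entry, so wall-clock duration can be read straight off the journal timestamps. A throwaway sketch (timestamps copied from session 12 above; the layout string assumes the 6-digit fractional seconds this journal uses):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Journal timestamp layout used above: month, day, time with microseconds.
	const layout = "Jan 2 15:04:05.000000"

	opened, err := time.Parse(layout, "Jul 7 02:59:14.570597") // session opened
	if err != nil {
		panic(err)
	}
	closed, err := time.Parse(layout, "Jul 7 02:59:16.158348") // session closed
	if err != nil {
		panic(err)
	}
	fmt.Printf("session 12 lasted %v\n", closed.Sub(opened)) // ≈1.59s
}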
Jul 7 02:59:32.096902 sshd[6472]: Accepted publickey for core from 139.178.68.195 port 41946 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:32.099902 sshd[6472]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:32.107068 systemd-logind[1484]: New session 15 of user core. Jul 7 02:59:32.115868 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 02:59:33.047652 sshd[6472]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:33.054429 systemd[1]: sshd@12-10.244.11.130:22-139.178.68.195:41946.service: Deactivated successfully. Jul 7 02:59:33.057158 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 02:59:33.061452 systemd-logind[1484]: Session 15 logged out. Waiting for processes to exit. Jul 7 02:59:33.063938 systemd-logind[1484]: Removed session 15. Jul 7 02:59:33.204575 systemd[1]: Started sshd@13-10.244.11.130:22-139.178.68.195:41954.service - OpenSSH per-connection server daemon (139.178.68.195:41954). Jul 7 02:59:34.168822 sshd[6487]: Accepted publickey for core from 139.178.68.195 port 41954 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:34.175137 sshd[6487]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:34.185305 systemd-logind[1484]: New session 16 of user core. Jul 7 02:59:34.192572 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 02:59:35.224685 sshd[6487]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:35.238273 systemd[1]: sshd@13-10.244.11.130:22-139.178.68.195:41954.service: Deactivated successfully. Jul 7 02:59:35.245066 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 02:59:35.248528 systemd-logind[1484]: Session 16 logged out. Waiting for processes to exit. Jul 7 02:59:35.250635 systemd-logind[1484]: Removed session 16. Jul 7 02:59:40.392216 systemd[1]: Started sshd@14-10.244.11.130:22-139.178.68.195:46746.service - OpenSSH per-connection server daemon (139.178.68.195:46746). Jul 7 02:59:41.439372 sshd[6530]: Accepted publickey for core from 139.178.68.195 port 46746 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:41.445952 sshd[6530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:41.461969 systemd-logind[1484]: New session 17 of user core. Jul 7 02:59:41.466443 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 02:59:42.839613 sshd[6530]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:42.857972 systemd[1]: sshd@14-10.244.11.130:22-139.178.68.195:46746.service: Deactivated successfully. Jul 7 02:59:42.858049 systemd-logind[1484]: Session 17 logged out. Waiting for processes to exit. Jul 7 02:59:42.866535 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 02:59:42.869323 systemd-logind[1484]: Removed session 17. 
Jul 7 02:59:43.499257 kubelet[2669]: I0707 02:59:43.490226 2669 scope.go:117] "RemoveContainer" containerID="64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c" Jul 7 02:59:43.894826 containerd[1504]: time="2025-07-07T02:59:43.861300229Z" level=info msg="RemoveContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\"" Jul 7 02:59:44.011167 containerd[1504]: time="2025-07-07T02:59:44.004686020Z" level=info msg="RemoveContainer for \"64aafb9eaded85961f76d6a6d7c4d5ca591997f08a937e144c630b09af24c36c\" returns successfully" Jul 7 02:59:44.011411 kubelet[2669]: I0707 02:59:44.011208 2669 scope.go:117] "RemoveContainer" containerID="35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32" Jul 7 02:59:44.016299 containerd[1504]: time="2025-07-07T02:59:44.014194629Z" level=info msg="RemoveContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\"" Jul 7 02:59:44.208114 containerd[1504]: time="2025-07-07T02:59:44.208009378Z" level=info msg="RemoveContainer for \"35d3c99a975b064292efe692e326b98e9b8a72dd6ab78f8a8be45d63364ffd32\" returns successfully" Jul 7 02:59:44.276396 containerd[1504]: time="2025-07-07T02:59:44.276322198Z" level=info msg="StopPodSandbox for \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\"" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:44.795 [WARNING][6550] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:44.799 [INFO][6550] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:44.799 [INFO][6550] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" iface="eth0" netns="" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:44.800 [INFO][6550] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:44.800 [INFO][6550] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.347 [INFO][6557] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.356 [INFO][6557] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.359 [INFO][6557] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.436 [WARNING][6557] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.436 [INFO][6557] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.443 [INFO][6557] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:45.485682 containerd[1504]: 2025-07-07 02:59:45.448 [INFO][6550] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:45.593062 containerd[1504]: time="2025-07-07T02:59:45.569792721Z" level=info msg="TearDown network for sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" successfully" Jul 7 02:59:45.593062 containerd[1504]: time="2025-07-07T02:59:45.590368433Z" level=info msg="StopPodSandbox for \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" returns successfully" Jul 7 02:59:45.633834 containerd[1504]: time="2025-07-07T02:59:45.632835013Z" level=info msg="RemovePodSandbox for \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\"" Jul 7 02:59:45.654218 containerd[1504]: time="2025-07-07T02:59:45.653463880Z" level=info msg="Forcibly stopping sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\"" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:45.880 [WARNING][6612] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:45.882 [INFO][6612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:45.882 [INFO][6612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" iface="eth0" netns="" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:45.882 [INFO][6612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:45.882 [INFO][6612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.102 [INFO][6636] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.104 [INFO][6636] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.104 [INFO][6636] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.121 [WARNING][6636] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.121 [INFO][6636] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" HandleID="k8s-pod-network.1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--7dt46-eth0" Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.123 [INFO][6636] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:46.129903 containerd[1504]: 2025-07-07 02:59:46.126 [INFO][6612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d" Jul 7 02:59:46.136295 containerd[1504]: time="2025-07-07T02:59:46.129971142Z" level=info msg="TearDown network for sandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" successfully" Jul 7 02:59:46.170159 containerd[1504]: time="2025-07-07T02:59:46.170010071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jul 7 02:59:46.170764 containerd[1504]: time="2025-07-07T02:59:46.170558231Z" level=info msg="RemovePodSandbox \"1c25175e4d6060291f5365bc62f97b66bb5378d46edb53f9c10162126955e61d\" returns successfully" Jul 7 02:59:46.176733 containerd[1504]: time="2025-07-07T02:59:46.176676402Z" level=info msg="StopPodSandbox for \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\"" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.266 [WARNING][6652] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.266 [INFO][6652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.266 [INFO][6652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" iface="eth0" netns="" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.266 [INFO][6652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.266 [INFO][6652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.312 [INFO][6662] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.312 [INFO][6662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.312 [INFO][6662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.328 [WARNING][6662] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.329 [INFO][6662] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.332 [INFO][6662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:46.339666 containerd[1504]: 2025-07-07 02:59:46.335 [INFO][6652] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.342488 containerd[1504]: time="2025-07-07T02:59:46.339729804Z" level=info msg="TearDown network for sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" successfully" Jul 7 02:59:46.342488 containerd[1504]: time="2025-07-07T02:59:46.339767395Z" level=info msg="StopPodSandbox for \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" returns successfully" Jul 7 02:59:46.342488 containerd[1504]: time="2025-07-07T02:59:46.341221978Z" level=info msg="RemovePodSandbox for \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\"" Jul 7 02:59:46.342488 containerd[1504]: time="2025-07-07T02:59:46.341281200Z" level=info msg="Forcibly stopping sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\"" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.403 [WARNING][6676] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" WorkloadEndpoint="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.404 [INFO][6676] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.404 [INFO][6676] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" iface="eth0" netns="" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.404 [INFO][6676] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.404 [INFO][6676] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.453 [INFO][6684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.453 [INFO][6684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.453 [INFO][6684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.481 [WARNING][6684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.481 [INFO][6684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" HandleID="k8s-pod-network.e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Workload="srv--3i0x6.gb1.brightbox.com-k8s-calico--apiserver--574fc86944--2st6q-eth0" Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.487 [INFO][6684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:46.496030 containerd[1504]: 2025-07-07 02:59:46.490 [INFO][6676] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b" Jul 7 02:59:46.496030 containerd[1504]: time="2025-07-07T02:59:46.495463237Z" level=info msg="TearDown network for sandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" successfully" Jul 7 02:59:46.804222 containerd[1504]: time="2025-07-07T02:59:46.804063231Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:59:46.804222 containerd[1504]: time="2025-07-07T02:59:46.804189339Z" level=info msg="RemovePodSandbox \"e559dd1df9b7721b37e8bc3f6dfd8a15d7af36a2103c3cc513ee2ef7f2e9769b\" returns successfully" Jul 7 02:59:46.827713 containerd[1504]: time="2025-07-07T02:59:46.827660943Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.016 [WARNING][6698] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"524e68d3-c271-42bd-a0b6-ec9248f8255b", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e", Pod:"csi-node-driver-dq6fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e2a6f22139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.018 [INFO][6698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.019 [INFO][6698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" iface="eth0" netns="" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.019 [INFO][6698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.019 [INFO][6698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.087 [INFO][6706] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.087 [INFO][6706] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.087 [INFO][6706] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.108 [WARNING][6706] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.108 [INFO][6706] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.111 [INFO][6706] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:47.121940 containerd[1504]: 2025-07-07 02:59:47.116 [INFO][6698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.161406 containerd[1504]: time="2025-07-07T02:59:47.123315794Z" level=info msg="TearDown network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" successfully" Jul 7 02:59:47.161406 containerd[1504]: time="2025-07-07T02:59:47.123381755Z" level=info msg="StopPodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" returns successfully" Jul 7 02:59:47.161406 containerd[1504]: time="2025-07-07T02:59:47.142724626Z" level=info msg="RemovePodSandbox for \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:59:47.161406 containerd[1504]: time="2025-07-07T02:59:47.142778465Z" level=info msg="Forcibly stopping sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\"" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.248 [WARNING][6720] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"524e68d3-c271-42bd-a0b6-ec9248f8255b", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 2, 58, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-3i0x6.gb1.brightbox.com", ContainerID:"d636bcc15d2cff11e90d8d6e3bd0eca4f1cca4ec7b7a16d1e725bf302f65521e", Pod:"csi-node-driver-dq6fd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4e2a6f22139", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.256 [INFO][6720] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.256 [INFO][6720] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" iface="eth0" netns="" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.256 [INFO][6720] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.256 [INFO][6720] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.309 [INFO][6727] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.310 [INFO][6727] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.310 [INFO][6727] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.320 [WARNING][6727] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.321 [INFO][6727] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" HandleID="k8s-pod-network.2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Workload="srv--3i0x6.gb1.brightbox.com-k8s-csi--node--driver--dq6fd-eth0" Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.325 [INFO][6727] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jul 7 02:59:47.336522 containerd[1504]: 2025-07-07 02:59:47.327 [INFO][6720] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e" Jul 7 02:59:47.336522 containerd[1504]: time="2025-07-07T02:59:47.336415945Z" level=info msg="TearDown network for sandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" successfully" Jul 7 02:59:47.344584 containerd[1504]: time="2025-07-07T02:59:47.344407577Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 7 02:59:47.344678 containerd[1504]: time="2025-07-07T02:59:47.344620238Z" level=info msg="RemovePodSandbox \"2640e1a37f3a5b3991ce0cd7d33f51b4f10ee1efdcaf84bd7110f3822d42542e\" returns successfully" Jul 7 02:59:48.079757 systemd[1]: Started sshd@15-10.244.11.130:22-139.178.68.195:46750.service - OpenSSH per-connection server daemon (139.178.68.195:46750). Jul 7 02:59:49.098215 sshd[6735]: Accepted publickey for core from 139.178.68.195 port 46750 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:49.102838 sshd[6735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:49.136364 systemd-logind[1484]: New session 18 of user core. Jul 7 02:59:49.151516 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 7 02:59:50.507035 sshd[6735]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:50.521206 systemd[1]: sshd@15-10.244.11.130:22-139.178.68.195:46750.service: Deactivated successfully. Jul 7 02:59:50.526334 systemd[1]: session-18.scope: Deactivated successfully. Jul 7 02:59:50.528474 systemd-logind[1484]: Session 18 logged out. Waiting for processes to exit. Jul 7 02:59:50.531033 systemd-logind[1484]: Removed session 18. Jul 7 02:59:55.688294 systemd[1]: Started sshd@16-10.244.11.130:22-139.178.68.195:48606.service - OpenSSH per-connection server daemon (139.178.68.195:48606). Jul 7 02:59:56.698400 sshd[6756]: Accepted publickey for core from 139.178.68.195 port 48606 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:56.703217 sshd[6756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:56.713222 systemd-logind[1484]: New session 19 of user core. Jul 7 02:59:56.719560 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 7 02:59:58.105443 sshd[6756]: pam_unix(sshd:session): session closed for user core Jul 7 02:59:58.113865 systemd-logind[1484]: Session 19 logged out. Waiting for processes to exit. 
Jul 7 02:59:58.115102 systemd[1]: sshd@16-10.244.11.130:22-139.178.68.195:48606.service: Deactivated successfully. Jul 7 02:59:58.118947 systemd[1]: session-19.scope: Deactivated successfully. Jul 7 02:59:58.124253 systemd-logind[1484]: Removed session 19. Jul 7 02:59:58.271348 systemd[1]: Started sshd@17-10.244.11.130:22-139.178.68.195:48622.service - OpenSSH per-connection server daemon (139.178.68.195:48622). Jul 7 02:59:59.209369 sshd[6769]: Accepted publickey for core from 139.178.68.195 port 48622 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 02:59:59.211301 sshd[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 02:59:59.218663 systemd-logind[1484]: New session 20 of user core. Jul 7 02:59:59.226615 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 7 03:00:00.481945 sshd[6769]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:00.501862 systemd[1]: sshd@17-10.244.11.130:22-139.178.68.195:48622.service: Deactivated successfully. Jul 7 03:00:00.516394 systemd[1]: session-20.scope: Deactivated successfully. Jul 7 03:00:00.520977 systemd-logind[1484]: Session 20 logged out. Waiting for processes to exit. Jul 7 03:00:00.523594 systemd-logind[1484]: Removed session 20. Jul 7 03:00:00.650750 systemd[1]: Started sshd@18-10.244.11.130:22-139.178.68.195:54714.service - OpenSSH per-connection server daemon (139.178.68.195:54714). Jul 7 03:00:01.698782 sshd[6780]: Accepted publickey for core from 139.178.68.195 port 54714 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 03:00:01.703075 sshd[6780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 03:00:01.712304 systemd-logind[1484]: New session 21 of user core. Jul 7 03:00:01.716509 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 7 03:00:07.822555 sshd[6780]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:07.937103 systemd-logind[1484]: Session 21 logged out. Waiting for processes to exit. Jul 7 03:00:07.949431 systemd[1]: sshd@18-10.244.11.130:22-139.178.68.195:54714.service: Deactivated successfully. Jul 7 03:00:07.954832 systemd[1]: session-21.scope: Deactivated successfully. Jul 7 03:00:08.126799 systemd[1]: Started sshd@19-10.244.11.130:22-139.178.68.195:54728.service - OpenSSH per-connection server daemon (139.178.68.195:54728). Jul 7 03:00:08.127935 systemd-logind[1484]: Removed session 21. Jul 7 03:00:09.377387 sshd[6823]: Accepted publickey for core from 139.178.68.195 port 54728 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 03:00:09.421852 sshd[6823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 03:00:09.537689 systemd-logind[1484]: New session 22 of user core. Jul 7 03:00:09.543828 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 7 03:00:12.099521 kubelet[2669]: E0707 03:00:12.056569 2669 kubelet.go:2512] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.336s" Jul 7 03:00:15.431365 sshd[6823]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:15.532099 systemd[1]: sshd@19-10.244.11.130:22-139.178.68.195:54728.service: Deactivated successfully. Jul 7 03:00:15.539596 systemd[1]: session-22.scope: Deactivated successfully. Jul 7 03:00:15.541372 systemd[1]: session-22.scope: Consumed 1.468s CPU time. Jul 7 03:00:15.546689 systemd-logind[1484]: Session 22 logged out. Waiting for processes to exit. 
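Editor's note: the kubelet error above ("Housekeeping took longer than expected ... expected=1s actual=1.336s") is a soft-budget check: the loop measures how long one housekeeping pass took and complains when it exceeds the configured interval. A schematic version of that check, assuming a fixed 1-second budget (not kubelet code):

package main

import (
	"fmt"
	"time"
)

func main() {
	const budget = time.Second // expected upper bound for one housekeeping pass

	start := time.Now()
	time.Sleep(1336 * time.Millisecond) // stand-in for the real work
	elapsed := time.Since(start)

	if elapsed > budget {
		fmt.Printf("housekeeping took longer than expected: expected=%v actual=%v\n",
			budget, elapsed.Round(time.Millisecond))
	}
}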
Jul 7 03:00:15.555803 systemd-logind[1484]: Removed session 22. Jul 7 03:00:15.629845 systemd[1]: Started sshd@20-10.244.11.130:22-139.178.68.195:38172.service - OpenSSH per-connection server daemon (139.178.68.195:38172). Jul 7 03:00:16.674026 sshd[6894]: Accepted publickey for core from 139.178.68.195 port 38172 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 03:00:16.679682 sshd[6894]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 03:00:16.699010 systemd-logind[1484]: New session 23 of user core. Jul 7 03:00:16.707722 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 7 03:00:19.458832 sshd[6894]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:19.482200 systemd[1]: sshd@20-10.244.11.130:22-139.178.68.195:38172.service: Deactivated successfully. Jul 7 03:00:19.486994 systemd[1]: session-23.scope: Deactivated successfully. Jul 7 03:00:19.489411 systemd-logind[1484]: Session 23 logged out. Waiting for processes to exit. Jul 7 03:00:19.495400 systemd-logind[1484]: Removed session 23. Jul 7 03:00:24.696766 systemd[1]: Started sshd@21-10.244.11.130:22-139.178.68.195:44606.service - OpenSSH per-connection server daemon (139.178.68.195:44606). Jul 7 03:00:25.806514 sshd[6939]: Accepted publickey for core from 139.178.68.195 port 44606 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 03:00:25.812598 sshd[6939]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 03:00:25.826435 systemd-logind[1484]: New session 24 of user core. Jul 7 03:00:25.833517 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 7 03:00:27.716023 sshd[6939]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:27.740478 systemd[1]: sshd@21-10.244.11.130:22-139.178.68.195:44606.service: Deactivated successfully. Jul 7 03:00:27.749725 systemd[1]: session-24.scope: Deactivated successfully. Jul 7 03:00:27.756198 systemd-logind[1484]: Session 24 logged out. Waiting for processes to exit. Jul 7 03:00:27.759601 systemd-logind[1484]: Removed session 24. Jul 7 03:00:32.899806 systemd[1]: Started sshd@22-10.244.11.130:22-139.178.68.195:37260.service - OpenSSH per-connection server daemon (139.178.68.195:37260). Jul 7 03:00:33.966028 sshd[6958]: Accepted publickey for core from 139.178.68.195 port 37260 ssh2: RSA SHA256:wARF/eMA/+KNKDqAnuQNtBuTQL1w3j+2vtSbEv/yp+s Jul 7 03:00:33.981401 sshd[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 03:00:33.995000 systemd-logind[1484]: New session 25 of user core. Jul 7 03:00:34.007131 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 7 03:00:36.179559 sshd[6958]: pam_unix(sshd:session): session closed for user core Jul 7 03:00:36.185188 systemd-logind[1484]: Session 25 logged out. Waiting for processes to exit. Jul 7 03:00:36.188194 systemd[1]: sshd@22-10.244.11.130:22-139.178.68.195:37260.service: Deactivated successfully. Jul 7 03:00:36.194524 systemd[1]: session-25.scope: Deactivated successfully. Jul 7 03:00:36.198619 systemd-logind[1484]: Removed session 25.