Mar 4 02:17:27.015408 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Mar 3 22:42:33 -00 2026
Mar 4 02:17:27.015441 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 02:17:27.015454 kernel: BIOS-provided physical RAM map:
Mar 4 02:17:27.015470 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 4 02:17:27.015479 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 4 02:17:27.015489 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 4 02:17:27.015499 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Mar 4 02:17:27.015509 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Mar 4 02:17:27.015519 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Mar 4 02:17:27.015528 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Mar 4 02:17:27.015538 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 4 02:17:27.015548 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 4 02:17:27.015562 kernel: NX (Execute Disable) protection: active
Mar 4 02:17:27.015572 kernel: APIC: Static calls initialized
Mar 4 02:17:27.015597 kernel: SMBIOS 2.8 present.
Mar 4 02:17:27.015607 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Mar 4 02:17:27.015618 kernel: Hypervisor detected: KVM
Mar 4 02:17:27.015632 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 4 02:17:27.015643 kernel: kvm-clock: using sched offset of 4408535566 cycles
Mar 4 02:17:27.015666 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 4 02:17:27.015676 kernel: tsc: Detected 2799.998 MHz processor
Mar 4 02:17:27.015686 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 4 02:17:27.015696 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 4 02:17:27.015706 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Mar 4 02:17:27.015716 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 4 02:17:27.015726 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 4 02:17:27.015740 kernel: Using GB pages for direct mapping
Mar 4 02:17:27.015750 kernel: ACPI: Early table checksum verification disabled
Mar 4 02:17:27.015760 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Mar 4 02:17:27.015770 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015780 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015802 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015812 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Mar 4 02:17:27.015822 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015832 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015847 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015870 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 4 02:17:27.015881 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Mar 4 02:17:27.015891 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Mar 4 02:17:27.015902 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Mar 4 02:17:27.015919 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Mar 4 02:17:27.015930 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Mar 4 02:17:27.015945 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Mar 4 02:17:27.015957 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Mar 4 02:17:27.015968 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 4 02:17:27.015979 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 4 02:17:27.015990 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Mar 4 02:17:27.016014 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Mar 4 02:17:27.016026 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Mar 4 02:17:27.016038 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Mar 4 02:17:27.016054 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Mar 4 02:17:27.016065 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Mar 4 02:17:27.016076 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Mar 4 02:17:27.016087 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Mar 4 02:17:27.016098 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Mar 4 02:17:27.016109 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Mar 4 02:17:27.016120 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Mar 4 02:17:27.016131 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Mar 4 02:17:27.016141 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Mar 4 02:17:27.016157 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Mar 4 02:17:27.016168 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 4 02:17:27.016179 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 4 02:17:27.016190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Mar 4 02:17:27.016202 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Mar 4 02:17:27.016213 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Mar 4 02:17:27.016224 kernel: Zone ranges:
Mar 4 02:17:27.016235 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 4 02:17:27.016246 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Mar 4 02:17:27.016257 kernel: Normal empty
Mar 4 02:17:27.016273 kernel: Movable zone start for each node
Mar 4 02:17:27.016298 kernel: Early memory node ranges
Mar 4 02:17:27.016308 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 4 02:17:27.016319 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Mar 4 02:17:27.016330 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Mar 4 02:17:27.016341 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 4 02:17:27.016364 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 4 02:17:27.016375 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Mar 4 02:17:27.016386 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 4 02:17:27.017720 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 4 02:17:27.017746 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 4 02:17:27.017757 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 4 02:17:27.017767 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 4 02:17:27.017778 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 4 02:17:27.017788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 4 02:17:27.017798 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 4 02:17:27.017811 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 4 02:17:27.017821 kernel: TSC deadline timer available
Mar 4 02:17:27.017848 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Mar 4 02:17:27.017859 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 4 02:17:27.017873 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Mar 4 02:17:27.017883 kernel: Booting paravirtualized kernel on KVM
Mar 4 02:17:27.017894 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 4 02:17:27.017904 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Mar 4 02:17:27.017915 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u262144
Mar 4 02:17:27.017926 kernel: pcpu-alloc: s196328 r8192 d28952 u262144 alloc=1*2097152
Mar 4 02:17:27.017936 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Mar 4 02:17:27.017951 kernel: kvm-guest: PV spinlocks enabled
Mar 4 02:17:27.017962 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 4 02:17:27.017974 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7
Mar 4 02:17:27.017997 kernel: random: crng init done
Mar 4 02:17:27.018020 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 4 02:17:27.018032 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 4 02:17:27.018043 kernel: Fallback order for Node 0: 0
Mar 4 02:17:27.018055 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Mar 4 02:17:27.018071 kernel: Policy zone: DMA32
Mar 4 02:17:27.018083 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 4 02:17:27.018094 kernel: software IO TLB: area num 16.
Mar 4 02:17:27.018106 kernel: Memory: 1901596K/2096616K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42892K init, 2304K bss, 194760K reserved, 0K cma-reserved)
Mar 4 02:17:27.018117 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Mar 4 02:17:27.018128 kernel: Kernel/User page tables isolation: enabled
Mar 4 02:17:27.018139 kernel: ftrace: allocating 37996 entries in 149 pages
Mar 4 02:17:27.018150 kernel: ftrace: allocated 149 pages with 4 groups
Mar 4 02:17:27.018162 kernel: Dynamic Preempt: voluntary
Mar 4 02:17:27.018177 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 4 02:17:27.018190 kernel: rcu: RCU event tracing is enabled.
Mar 4 02:17:27.018201 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Mar 4 02:17:27.018212 kernel: Trampoline variant of Tasks RCU enabled.
Mar 4 02:17:27.018224 kernel: Rude variant of Tasks RCU enabled.
Mar 4 02:17:27.018246 kernel: Tracing variant of Tasks RCU enabled.
Mar 4 02:17:27.018262 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 4 02:17:27.018273 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Mar 4 02:17:27.018297 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Mar 4 02:17:27.018308 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 4 02:17:27.018319 kernel: Console: colour VGA+ 80x25
Mar 4 02:17:27.018330 kernel: printk: console [tty0] enabled
Mar 4 02:17:27.018345 kernel: printk: console [ttyS0] enabled
Mar 4 02:17:27.018357 kernel: ACPI: Core revision 20230628
Mar 4 02:17:27.018380 kernel: APIC: Switch to symmetric I/O mode setup
Mar 4 02:17:27.020395 kernel: x2apic enabled
Mar 4 02:17:27.020412 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 4 02:17:27.020432 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns
Mar 4 02:17:27.020445 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Mar 4 02:17:27.020457 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 4 02:17:27.020469 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 4 02:17:27.020480 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 4 02:17:27.020492 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 4 02:17:27.020504 kernel: Spectre V2 : Mitigation: Retpolines
Mar 4 02:17:27.020516 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Mar 4 02:17:27.020528 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 4 02:17:27.020539 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 4 02:17:27.020555 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 4 02:17:27.020568 kernel: MDS: Mitigation: Clear CPU buffers
Mar 4 02:17:27.020579 kernel: MMIO Stale Data: Unknown: No mitigations
Mar 4 02:17:27.020591 kernel: SRBDS: Unknown: Dependent on hypervisor status
Mar 4 02:17:27.020602 kernel: active return thunk: its_return_thunk
Mar 4 02:17:27.020614 kernel: ITS: Mitigation: Aligned branch/return thunks
Mar 4 02:17:27.020626 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 4 02:17:27.020637 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 4 02:17:27.020649 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 4 02:17:27.020661 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 4 02:17:27.020672 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 4 02:17:27.020689 kernel: Freeing SMP alternatives memory: 32K
Mar 4 02:17:27.020701 kernel: pid_max: default: 32768 minimum: 301
Mar 4 02:17:27.020712 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 4 02:17:27.020724 kernel: landlock: Up and running.
Mar 4 02:17:27.020735 kernel: SELinux: Initializing.
Mar 4 02:17:27.020747 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 02:17:27.020759 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 4 02:17:27.020771 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Mar 4 02:17:27.020783 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:17:27.020795 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:17:27.020807 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Mar 4 02:17:27.020823 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Mar 4 02:17:27.020835 kernel: signal: max sigframe size: 1776
Mar 4 02:17:27.020848 kernel: rcu: Hierarchical SRCU implementation.
Mar 4 02:17:27.020860 kernel: rcu: Max phase no-delay instances is 400.
Mar 4 02:17:27.020872 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 4 02:17:27.020884 kernel: smp: Bringing up secondary CPUs ...
Mar 4 02:17:27.020896 kernel: smpboot: x86: Booting SMP configuration:
Mar 4 02:17:27.020908 kernel: .... node #0, CPUs: #1
Mar 4 02:17:27.020919 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Mar 4 02:17:27.020936 kernel: smp: Brought up 1 node, 2 CPUs
Mar 4 02:17:27.020948 kernel: smpboot: Max logical packages: 16
Mar 4 02:17:27.020960 kernel: smpboot: Total of 2 processors activated (11199.99 BogoMIPS)
Mar 4 02:17:27.020972 kernel: devtmpfs: initialized
Mar 4 02:17:27.020983 kernel: x86/mm: Memory block size: 128MB
Mar 4 02:17:27.020995 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 4 02:17:27.021022 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Mar 4 02:17:27.021035 kernel: pinctrl core: initialized pinctrl subsystem
Mar 4 02:17:27.021047 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 4 02:17:27.021064 kernel: audit: initializing netlink subsys (disabled)
Mar 4 02:17:27.021076 kernel: audit: type=2000 audit(1772590645.770:1): state=initialized audit_enabled=0 res=1
Mar 4 02:17:27.021088 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 4 02:17:27.021099 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 4 02:17:27.021111 kernel: cpuidle: using governor menu
Mar 4 02:17:27.021123 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 4 02:17:27.021135 kernel: dca service started, version 1.12.1
Mar 4 02:17:27.021147 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Mar 4 02:17:27.021159 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Mar 4 02:17:27.021176 kernel: PCI: Using configuration type 1 for base access
Mar 4 02:17:27.021188 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 4 02:17:27.021200 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 4 02:17:27.021211 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 4 02:17:27.021223 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 4 02:17:27.021235 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 4 02:17:27.021247 kernel: ACPI: Added _OSI(Module Device)
Mar 4 02:17:27.021259 kernel: ACPI: Added _OSI(Processor Device)
Mar 4 02:17:27.021283 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 4 02:17:27.021299 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 4 02:17:27.021311 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 4 02:17:27.021322 kernel: ACPI: Interpreter enabled
Mar 4 02:17:27.021345 kernel: ACPI: PM: (supports S0 S5)
Mar 4 02:17:27.021357 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 4 02:17:27.021368 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 4 02:17:27.021379 kernel: PCI: Using E820 reservations for host bridge windows
Mar 4 02:17:27.021427 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 4 02:17:27.021442 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 4 02:17:27.021726 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 4 02:17:27.023441 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 4 02:17:27.023628 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 4 02:17:27.023648 kernel: PCI host bridge to bus 0000:00
Mar 4 02:17:27.023826 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 4 02:17:27.023996 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 4 02:17:27.024185 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 4 02:17:27.024349 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Mar 4 02:17:27.025735 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Mar 4 02:17:27.025897 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Mar 4 02:17:27.026070 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 4 02:17:27.026271 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 4 02:17:27.027638 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Mar 4 02:17:27.027824 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Mar 4 02:17:27.028027 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Mar 4 02:17:27.028196 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Mar 4 02:17:27.028363 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 4 02:17:27.032217 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.032434 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Mar 4 02:17:27.032672 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.032858 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Mar 4 02:17:27.033067 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.033240 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Mar 4 02:17:27.035459 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.035655 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Mar 4 02:17:27.035853 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.036038 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Mar 4 02:17:27.036229 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.036473 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Mar 4 02:17:27.036657 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.036826 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Mar 4 02:17:27.037016 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Mar 4 02:17:27.037196 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Mar 4 02:17:27.039448 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 4 02:17:27.039653 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Mar 4 02:17:27.039848 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Mar 4 02:17:27.040033 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Mar 4 02:17:27.040206 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Mar 4 02:17:27.041423 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Mar 4 02:17:27.041620 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Mar 4 02:17:27.041791 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Mar 4 02:17:27.041960 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Mar 4 02:17:27.042158 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 4 02:17:27.042326 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 4 02:17:27.043600 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 4 02:17:27.043793 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Mar 4 02:17:27.043969 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Mar 4 02:17:27.044185 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 4 02:17:27.044353 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Mar 4 02:17:27.044666 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Mar 4 02:17:27.044868 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Mar 4 02:17:27.046570 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 02:17:27.046764 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 02:17:27.046936 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:17:27.047133 kernel: pci_bus 0000:02: extended config space not accessible
Mar 4 02:17:27.047328 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Mar 4 02:17:27.047555 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Mar 4 02:17:27.047739 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 02:17:27.050429 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 02:17:27.050649 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Mar 4 02:17:27.050830 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Mar 4 02:17:27.051052 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 02:17:27.051222 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 02:17:27.052432 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:17:27.052660 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Mar 4 02:17:27.052840 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Mar 4 02:17:27.053019 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 02:17:27.053189 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 02:17:27.053365 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:17:27.054587 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 02:17:27.054759 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 02:17:27.054947 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:17:27.055148 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 02:17:27.055313 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 02:17:27.055503 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:17:27.055703 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 02:17:27.055867 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 02:17:27.056044 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:17:27.056216 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 02:17:27.061530 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 02:17:27.061710 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:17:27.061906 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 02:17:27.062113 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 02:17:27.062297 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:17:27.062317 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 4 02:17:27.062329 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 4 02:17:27.062341 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 4 02:17:27.062373 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 4 02:17:27.062419 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 4 02:17:27.062458 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 4 02:17:27.062471 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 4 02:17:27.062483 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 4 02:17:27.062495 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 4 02:17:27.062507 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 4 02:17:27.062519 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 4 02:17:27.062531 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 4 02:17:27.062543 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 4 02:17:27.062554 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 4 02:17:27.062572 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 4 02:17:27.062583 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 4 02:17:27.062595 kernel: iommu: Default domain type: Translated
Mar 4 02:17:27.062607 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 4 02:17:27.062619 kernel: PCI: Using ACPI for IRQ routing
Mar 4 02:17:27.062631 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 4 02:17:27.062643 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 4 02:17:27.062655 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Mar 4 02:17:27.062835 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 4 02:17:27.063051 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 4 02:17:27.063218 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 4 02:17:27.063237 kernel: vgaarb: loaded
Mar 4 02:17:27.063249 kernel: clocksource: Switched to clocksource kvm-clock
Mar 4 02:17:27.063261 kernel: VFS: Disk quotas dquot_6.6.0
Mar 4 02:17:27.063273 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 4 02:17:27.063285 kernel: pnp: PnP ACPI init
Mar 4 02:17:27.063529 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Mar 4 02:17:27.063558 kernel: pnp: PnP ACPI: found 5 devices
Mar 4 02:17:27.063570 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 4 02:17:27.063583 kernel: NET: Registered PF_INET protocol family
Mar 4 02:17:27.063595 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 4 02:17:27.063607 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 4 02:17:27.063619 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 4 02:17:27.063632 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 4 02:17:27.063644 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 4 02:17:27.063661 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 4 02:17:27.063673 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 02:17:27.063685 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 4 02:17:27.063697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 4 02:17:27.063709 kernel: NET: Registered PF_XDP protocol family
Mar 4 02:17:27.063884 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Mar 4 02:17:27.064078 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Mar 4 02:17:27.064244 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Mar 4 02:17:27.064461 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Mar 4 02:17:27.064641 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Mar 4 02:17:27.064821 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Mar 4 02:17:27.065042 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Mar 4 02:17:27.065211 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Mar 4 02:17:27.065397 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Mar 4 02:17:27.066645 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Mar 4 02:17:27.066813 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Mar 4 02:17:27.066977 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Mar 4 02:17:27.067156 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Mar 4 02:17:27.067320 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Mar 4 02:17:27.070682 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Mar 4 02:17:27.070868 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Mar 4 02:17:27.071058 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Mar 4 02:17:27.071273 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Mar 4 02:17:27.071479 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Mar 4 02:17:27.071644 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Mar 4 02:17:27.071822 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Mar 4 02:17:27.071987 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:17:27.072169 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Mar 4 02:17:27.072360 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Mar 4 02:17:27.072542 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Mar 4 02:17:27.072705 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:17:27.072891 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Mar 4 02:17:27.073073 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Mar 4 02:17:27.073242 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Mar 4 02:17:27.075454 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:17:27.075616 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Mar 4 02:17:27.075777 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Mar 4 02:17:27.075938 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Mar 4 02:17:27.076128 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:17:27.076295 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Mar 4 02:17:27.076518 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Mar 4 02:17:27.076666 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Mar 4 02:17:27.076827 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:17:27.077015 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Mar 4 02:17:27.077184 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Mar 4 02:17:27.077360 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Mar 4 02:17:27.078556 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:17:27.078743 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Mar 4 02:17:27.078911 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Mar 4 02:17:27.079110 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Mar 4 02:17:27.079277 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:17:27.079500 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Mar 4 02:17:27.079690 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Mar 4 02:17:27.079854 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Mar 4 02:17:27.080057 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:17:27.080215 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 4 02:17:27.082404 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 4 02:17:27.082588 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 4 02:17:27.082740 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Mar 4 02:17:27.082901 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Mar 4 02:17:27.083081 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Mar 4 02:17:27.083248 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Mar 4 02:17:27.084436 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Mar 4 02:17:27.084596 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Mar 4 02:17:27.084746 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Mar 4 02:17:27.084894 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Mar 4 02:17:27.085076 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Mar 4 02:17:27.085234 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Mar 4 02:17:27.087441 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Mar 4 02:17:27.087612 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Mar 4 02:17:27.087792 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Mar 4 02:17:27.087993 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Mar 4 02:17:27.088167 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Mar 4 02:17:27.088334 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Mar 4 02:17:27.088551 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Mar 4 02:17:27.088712 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Mar 4 02:17:27.088881 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Mar 4 02:17:27.089076 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Mar 4 02:17:27.089236 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Mar 4 02:17:27.089768 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Mar 4 02:17:27.089972 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Mar 4 02:17:27.090171 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Mar 4 02:17:27.090339 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Mar 4 02:17:27.090539 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Mar 4 02:17:27.090738 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Mar 4 02:17:27.090909 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Mar 4 02:17:27.090955 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 4 02:17:27.090968 kernel: PCI: CLS 0 bytes, default 64
Mar 4 02:17:27.090979 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Mar 4 02:17:27.091023 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB)
Mar 4 02:17:27.091036 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 4 02:17:27.091049 kernel: clocksource: tsc: mask: 0xffffffffffffffff
max_cycles: 0x285c3ee517e, max_idle_ns: 440795257231 ns Mar 4 02:17:27.091061 kernel: Initialise system trusted keyrings Mar 4 02:17:27.091074 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Mar 4 02:17:27.091087 kernel: Key type asymmetric registered Mar 4 02:17:27.091105 kernel: Asymmetric key parser 'x509' registered Mar 4 02:17:27.091118 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 4 02:17:27.091131 kernel: io scheduler mq-deadline registered Mar 4 02:17:27.091144 kernel: io scheduler kyber registered Mar 4 02:17:27.091156 kernel: io scheduler bfq registered Mar 4 02:17:27.091343 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Mar 4 02:17:27.093553 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Mar 4 02:17:27.093751 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.093936 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Mar 4 02:17:27.094142 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Mar 4 02:17:27.094311 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.095754 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Mar 4 02:17:27.095931 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Mar 4 02:17:27.096126 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.096317 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Mar 4 02:17:27.096530 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Mar 4 02:17:27.096711 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.096892 kernel: pcieport 0000:00:02.4: PME: Signaling 
with IRQ 28 Mar 4 02:17:27.097085 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Mar 4 02:17:27.097254 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.097522 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Mar 4 02:17:27.097692 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Mar 4 02:17:27.097869 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.098060 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Mar 4 02:17:27.098228 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Mar 4 02:17:27.098433 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.098627 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Mar 4 02:17:27.098809 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Mar 4 02:17:27.098993 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Mar 4 02:17:27.099037 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 4 02:17:27.099051 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Mar 4 02:17:27.099064 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Mar 4 02:17:27.099077 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 4 02:17:27.099097 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 4 02:17:27.099110 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 4 02:17:27.099123 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 4 02:17:27.099135 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 4 02:17:27.099148 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 
Mar 4 02:17:27.099328 kernel: rtc_cmos 00:03: RTC can wake from S4 Mar 4 02:17:27.099579 kernel: rtc_cmos 00:03: registered as rtc0 Mar 4 02:17:27.099739 kernel: rtc_cmos 00:03: setting system clock to 2026-03-04T02:17:26 UTC (1772590646) Mar 4 02:17:27.099905 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Mar 4 02:17:27.099934 kernel: intel_pstate: CPU model not supported Mar 4 02:17:27.099953 kernel: NET: Registered PF_INET6 protocol family Mar 4 02:17:27.099966 kernel: Segment Routing with IPv6 Mar 4 02:17:27.099979 kernel: In-situ OAM (IOAM) with IPv6 Mar 4 02:17:27.099992 kernel: NET: Registered PF_PACKET protocol family Mar 4 02:17:27.100015 kernel: Key type dns_resolver registered Mar 4 02:17:27.100029 kernel: IPI shorthand broadcast: enabled Mar 4 02:17:27.100041 kernel: sched_clock: Marking stable (1200025557, 222002703)->(1544930166, -122901906) Mar 4 02:17:27.100060 kernel: registered taskstats version 1 Mar 4 02:17:27.100073 kernel: Loading compiled-in X.509 certificates Mar 4 02:17:27.100085 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: be1dcbe3e3dee66976c19d61f4b179b405e1c498' Mar 4 02:17:27.100098 kernel: Key type .fscrypt registered Mar 4 02:17:27.100110 kernel: Key type fscrypt-provisioning registered Mar 4 02:17:27.100122 kernel: ima: No TPM chip found, activating TPM-bypass! 
Mar 4 02:17:27.100135 kernel: ima: Allocated hash algorithm: sha1 Mar 4 02:17:27.100147 kernel: ima: No architecture policies found Mar 4 02:17:27.100159 kernel: clk: Disabling unused clocks Mar 4 02:17:27.100177 kernel: Freeing unused kernel image (initmem) memory: 42892K Mar 4 02:17:27.100190 kernel: Write protecting the kernel read-only data: 36864k Mar 4 02:17:27.100202 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Mar 4 02:17:27.100215 kernel: Run /init as init process Mar 4 02:17:27.100227 kernel: with arguments: Mar 4 02:17:27.100240 kernel: /init Mar 4 02:17:27.100252 kernel: with environment: Mar 4 02:17:27.100277 kernel: HOME=/ Mar 4 02:17:27.100288 kernel: TERM=linux Mar 4 02:17:27.100316 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 4 02:17:27.100344 systemd[1]: Detected virtualization kvm. Mar 4 02:17:27.100357 systemd[1]: Detected architecture x86-64. Mar 4 02:17:27.100375 systemd[1]: Running in initrd. Mar 4 02:17:27.100431 systemd[1]: No hostname configured, using default hostname. Mar 4 02:17:27.100457 systemd[1]: Hostname set to . Mar 4 02:17:27.100471 systemd[1]: Initializing machine ID from VM UUID. Mar 4 02:17:27.100490 systemd[1]: Queued start job for default target initrd.target. Mar 4 02:17:27.100504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 4 02:17:27.100517 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 4 02:17:27.100531 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Mar 4 02:17:27.100544 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 4 02:17:27.100569 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 4 02:17:27.100583 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 4 02:17:27.100603 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 4 02:17:27.100618 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Mar 4 02:17:27.100631 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 4 02:17:27.100644 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 4 02:17:27.100658 systemd[1]: Reached target paths.target - Path Units. Mar 4 02:17:27.100680 systemd[1]: Reached target slices.target - Slice Units. Mar 4 02:17:27.100693 systemd[1]: Reached target swap.target - Swaps. Mar 4 02:17:27.100707 systemd[1]: Reached target timers.target - Timer Units. Mar 4 02:17:27.100725 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 4 02:17:27.100739 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 4 02:17:27.100752 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 4 02:17:27.100766 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 4 02:17:27.100779 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 4 02:17:27.100793 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 4 02:17:27.100807 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 4 02:17:27.100820 systemd[1]: Reached target sockets.target - Socket Units. Mar 4 02:17:27.100838 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Mar 4 02:17:27.100856 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 4 02:17:27.100870 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 4 02:17:27.100883 systemd[1]: Starting systemd-fsck-usr.service... Mar 4 02:17:27.100905 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 4 02:17:27.100919 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 4 02:17:27.100933 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 02:17:27.100947 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 4 02:17:27.100960 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 4 02:17:27.101033 systemd-journald[202]: Collecting audit messages is disabled. Mar 4 02:17:27.101071 systemd[1]: Finished systemd-fsck-usr.service. Mar 4 02:17:27.101091 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 4 02:17:27.101105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 4 02:17:27.101119 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 4 02:17:27.101133 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 4 02:17:27.101145 kernel: Bridge firewalling registered Mar 4 02:17:27.101159 systemd-journald[202]: Journal started Mar 4 02:17:27.101190 systemd-journald[202]: Runtime Journal (/run/log/journal/54964f994ae24a1f9ea93bff2a26a7a4) is 4.7M, max 38.0M, 33.2M free. Mar 4 02:17:27.026415 systemd-modules-load[203]: Inserted module 'overlay' Mar 4 02:17:27.123544 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 4 02:17:27.094919 systemd-modules-load[203]: Inserted module 'br_netfilter' Mar 4 02:17:27.125874 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 4 02:17:27.127005 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:17:27.134176 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 4 02:17:27.142547 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:17:27.145544 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 4 02:17:27.149560 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 4 02:17:27.166074 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:17:27.169634 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 4 02:17:27.175782 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 4 02:17:27.182662 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 4 02:17:27.185566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 4 02:17:27.201787 dracut-cmdline[234]: dracut-dracut-053 Mar 4 02:17:27.207567 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=cfbb17c272ffeca64391861cc763ec4868ca597850b31cbd6ed67c590a72edc7 Mar 4 02:17:27.236197 systemd-resolved[238]: Positive Trust Anchors: Mar 4 02:17:27.236217 systemd-resolved[238]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 4 02:17:27.236258 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 4 02:17:27.245115 systemd-resolved[238]: Defaulting to hostname 'linux'. Mar 4 02:17:27.246743 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 4 02:17:27.248188 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 4 02:17:27.308449 kernel: SCSI subsystem initialized Mar 4 02:17:27.320539 kernel: Loading iSCSI transport class v2.0-870. Mar 4 02:17:27.333497 kernel: iscsi: registered transport (tcp) Mar 4 02:17:27.358579 kernel: iscsi: registered transport (qla4xxx) Mar 4 02:17:27.358645 kernel: QLogic iSCSI HBA Driver Mar 4 02:17:27.410818 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 4 02:17:27.426595 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 4 02:17:27.455974 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Mar 4 02:17:27.456075 kernel: device-mapper: uevent: version 1.0.3 Mar 4 02:17:27.456098 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 4 02:17:27.505431 kernel: raid6: sse2x4 gen() 13646 MB/s Mar 4 02:17:27.522451 kernel: raid6: sse2x2 gen() 9102 MB/s Mar 4 02:17:27.541028 kernel: raid6: sse2x1 gen() 9229 MB/s Mar 4 02:17:27.541064 kernel: raid6: using algorithm sse2x4 gen() 13646 MB/s Mar 4 02:17:27.560105 kernel: raid6: .... xor() 7818 MB/s, rmw enabled Mar 4 02:17:27.560141 kernel: raid6: using ssse3x2 recovery algorithm Mar 4 02:17:27.586448 kernel: xor: automatically using best checksumming function avx Mar 4 02:17:27.774469 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 4 02:17:27.791127 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 4 02:17:27.799664 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 4 02:17:27.833478 systemd-udevd[421]: Using default interface naming scheme 'v255'. Mar 4 02:17:27.841068 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 4 02:17:27.849548 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 4 02:17:27.871200 dracut-pre-trigger[426]: rd.md=0: removing MD RAID activation Mar 4 02:17:27.909769 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Mar 4 02:17:27.915635 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 4 02:17:28.039532 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 4 02:17:28.047632 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 4 02:17:28.071444 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 4 02:17:28.073565 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Mar 4 02:17:28.076897 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 4 02:17:28.077661 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 4 02:17:28.087554 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 4 02:17:28.111569 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 4 02:17:28.161394 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Mar 4 02:17:28.167090 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 4 02:17:28.186382 kernel: cryptd: max_cpu_qlen set to 1000 Mar 4 02:17:28.195267 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 4 02:17:28.195342 kernel: GPT:17805311 != 125829119 Mar 4 02:17:28.195401 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 4 02:17:28.195427 kernel: GPT:17805311 != 125829119 Mar 4 02:17:28.195442 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 4 02:17:28.195470 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:17:28.212316 kernel: AVX version of gcm_enc/dec engaged. Mar 4 02:17:28.212398 kernel: AES CTR mode by8 optimization enabled Mar 4 02:17:28.237009 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 4 02:17:28.237179 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:17:28.242223 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:17:28.243962 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 4 02:17:28.244221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:17:28.246452 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 4 02:17:28.257581 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 4 02:17:28.275561 kernel: BTRFS: device fsid 251c1416-ef37-47f1-be3f-832af5870605 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (476) Mar 4 02:17:28.275603 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (479) Mar 4 02:17:28.281408 kernel: ACPI: bus type USB registered Mar 4 02:17:28.286219 kernel: usbcore: registered new interface driver usbfs Mar 4 02:17:28.286253 kernel: usbcore: registered new interface driver hub Mar 4 02:17:28.289386 kernel: usbcore: registered new device driver usb Mar 4 02:17:28.318016 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 4 02:17:28.438864 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 02:17:28.439172 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Mar 4 02:17:28.439412 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Mar 4 02:17:28.439611 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Mar 4 02:17:28.439804 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Mar 4 02:17:28.440030 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Mar 4 02:17:28.440238 kernel: hub 1-0:1.0: USB hub found Mar 4 02:17:28.440508 kernel: hub 1-0:1.0: 4 ports detected Mar 4 02:17:28.440728 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Mar 4 02:17:28.441020 kernel: hub 2-0:1.0: USB hub found Mar 4 02:17:28.441254 kernel: hub 2-0:1.0: 4 ports detected Mar 4 02:17:28.441505 kernel: libata version 3.00 loaded. 
Mar 4 02:17:28.441531 kernel: ahci 0000:00:1f.2: version 3.0 Mar 4 02:17:28.441716 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Mar 4 02:17:28.441750 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Mar 4 02:17:28.441932 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Mar 4 02:17:28.442153 kernel: scsi host0: ahci Mar 4 02:17:28.442384 kernel: scsi host1: ahci Mar 4 02:17:28.442624 kernel: scsi host2: ahci Mar 4 02:17:28.442859 kernel: scsi host3: ahci Mar 4 02:17:28.443091 kernel: scsi host4: ahci Mar 4 02:17:28.443284 kernel: scsi host5: ahci Mar 4 02:17:28.443558 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Mar 4 02:17:28.443578 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Mar 4 02:17:28.443593 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Mar 4 02:17:28.443609 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Mar 4 02:17:28.443631 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Mar 4 02:17:28.443648 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Mar 4 02:17:28.439866 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 4 02:17:28.452656 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 4 02:17:28.459448 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 4 02:17:28.465386 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 4 02:17:28.466172 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 4 02:17:28.475562 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 4 02:17:28.480556 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 4 02:17:28.482338 disk-uuid[564]: Primary Header is updated. Mar 4 02:17:28.482338 disk-uuid[564]: Secondary Entries is updated. Mar 4 02:17:28.482338 disk-uuid[564]: Secondary Header is updated. Mar 4 02:17:28.489806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:17:28.493398 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:17:28.501427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:17:28.522746 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 4 02:17:28.565427 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Mar 4 02:17:28.670442 kernel: ata1: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.670515 kernel: ata5: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.670535 kernel: ata3: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.673423 kernel: ata6: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.673457 kernel: ata4: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.674432 kernel: ata2: SATA link down (SStatus 0 SControl 300) Mar 4 02:17:28.709397 kernel: hid: raw HID events driver (C) Jiri Kosina Mar 4 02:17:28.716438 kernel: usbcore: registered new interface driver usbhid Mar 4 02:17:28.716478 kernel: usbhid: USB HID core driver Mar 4 02:17:28.723775 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Mar 4 02:17:28.723823 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Mar 4 02:17:29.500666 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 4 02:17:29.501431 disk-uuid[565]: The operation has completed successfully. Mar 4 02:17:29.550881 systemd[1]: disk-uuid.service: Deactivated successfully. 
Mar 4 02:17:29.551077 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 4 02:17:29.579697 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 4 02:17:29.584449 sh[587]: Success Mar 4 02:17:29.603393 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Mar 4 02:17:29.674660 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 4 02:17:29.677522 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 4 02:17:29.679210 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Mar 4 02:17:29.708416 kernel: BTRFS info (device dm-0): first mount of filesystem 251c1416-ef37-47f1-be3f-832af5870605 Mar 4 02:17:29.708477 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 4 02:17:29.708496 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 4 02:17:29.708881 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 4 02:17:29.710468 kernel: BTRFS info (device dm-0): using free space tree Mar 4 02:17:29.720690 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 4 02:17:29.722066 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 4 02:17:29.727590 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 4 02:17:29.729552 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Mar 4 02:17:29.749391 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 02:17:29.749447 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 4 02:17:29.749466 kernel: BTRFS info (device vda6): using free space tree Mar 4 02:17:29.756387 kernel: BTRFS info (device vda6): auto enabling async discard Mar 4 02:17:29.772144 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747 Mar 4 02:17:29.771789 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 4 02:17:29.779314 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 4 02:17:29.786628 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 4 02:17:29.869384 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 4 02:17:29.889629 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 4 02:17:29.925730 systemd-networkd[769]: lo: Link UP Mar 4 02:17:29.925743 systemd-networkd[769]: lo: Gained carrier Mar 4 02:17:29.931671 systemd-networkd[769]: Enumeration completed Mar 4 02:17:29.931865 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 4 02:17:29.932295 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 4 02:17:29.932301 systemd-networkd[769]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 4 02:17:29.933722 systemd[1]: Reached target network.target - Network. Mar 4 02:17:29.935041 systemd-networkd[769]: eth0: Link UP Mar 4 02:17:29.935046 systemd-networkd[769]: eth0: Gained carrier Mar 4 02:17:29.935058 systemd-networkd[769]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Mar 4 02:17:29.944583 ignition[691]: Ignition 2.19.0 Mar 4 02:17:29.944601 ignition[691]: Stage: fetch-offline Mar 4 02:17:29.946407 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 4 02:17:29.944679 ignition[691]: no configs at "/usr/lib/ignition/base.d" Mar 4 02:17:29.944701 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 4 02:17:29.944910 ignition[691]: parsed url from cmdline: "" Mar 4 02:17:29.944917 ignition[691]: no config URL provided Mar 4 02:17:29.944927 ignition[691]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 02:17:29.944942 ignition[691]: no config at "/usr/lib/ignition/user.ign" Mar 4 02:17:29.944964 ignition[691]: failed to fetch config: resource requires networking Mar 4 02:17:29.945240 ignition[691]: Ignition finished successfully Mar 4 02:17:29.954655 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 4 02:17:29.964555 systemd-networkd[769]: eth0: DHCPv4 address 10.230.66.70/30, gateway 10.230.66.69 acquired from 10.230.66.69 Mar 4 02:17:29.974661 ignition[779]: Ignition 2.19.0 Mar 4 02:17:29.974682 ignition[779]: Stage: fetch Mar 4 02:17:29.974934 ignition[779]: no configs at "/usr/lib/ignition/base.d" Mar 4 02:17:29.974966 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Mar 4 02:17:29.975114 ignition[779]: parsed url from cmdline: "" Mar 4 02:17:29.975121 ignition[779]: no config URL provided Mar 4 02:17:29.975130 ignition[779]: reading system config file "/usr/lib/ignition/user.ign" Mar 4 02:17:29.975146 ignition[779]: no config at "/usr/lib/ignition/user.ign" Mar 4 02:17:29.975388 ignition[779]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Mar 4 02:17:29.975497 ignition[779]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Mar 4 02:17:29.975558 ignition[779]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... 
Mar 4 02:17:29.992176 ignition[779]: GET result: OK
Mar 4 02:17:29.993339 ignition[779]: parsing config with SHA512: 6e3c42aa7c91409ee63d81cd0a7aaabccb0417dd14539f5c349c5a14ba49eb81db0de438d2971e2a1d5101e50ffccfad1a475e0ce13cba13a2901ee53b016d09
Mar 4 02:17:29.999825 unknown[779]: fetched base config from "system"
Mar 4 02:17:29.999841 unknown[779]: fetched base config from "system"
Mar 4 02:17:30.000396 ignition[779]: fetch: fetch complete
Mar 4 02:17:29.999850 unknown[779]: fetched user config from "openstack"
Mar 4 02:17:30.000404 ignition[779]: fetch: fetch passed
Mar 4 02:17:30.004707 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 4 02:17:30.000464 ignition[779]: Ignition finished successfully
Mar 4 02:17:30.013599 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 4 02:17:30.033646 ignition[786]: Ignition 2.19.0
Mar 4 02:17:30.033663 ignition[786]: Stage: kargs
Mar 4 02:17:30.033890 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Mar 4 02:17:30.033909 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:17:30.038529 ignition[786]: kargs: kargs passed
Mar 4 02:17:30.038594 ignition[786]: Ignition finished successfully
Mar 4 02:17:30.040361 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 4 02:17:30.047616 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 4 02:17:30.066041 ignition[792]: Ignition 2.19.0
Mar 4 02:17:30.066062 ignition[792]: Stage: disks
Mar 4 02:17:30.066294 ignition[792]: no configs at "/usr/lib/ignition/base.d"
Mar 4 02:17:30.068928 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 4 02:17:30.066314 ignition[792]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:17:30.070473 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
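The fetch stage above logs a SHA512 digest of the retrieved config before parsing it. A minimal Python sketch of that kind of integrity fingerprint; the payload here is a hypothetical stand-in, since the real user_data bytes are not reproduced in the log:

```python
import hashlib

# Hypothetical stand-in for the user_data bytes fetched from the
# OpenStack metadata service; the real payload is not shown in the log.
payload = b'{"ignition": {"version": "3.3.0"}}'

# Ignition records the SHA512 of the raw config bytes before parsing,
# producing a 128-character hex string like the one in the log above.
digest = hashlib.sha512(payload).hexdigest()
print(digest)
```

The digest identifies the exact config bytes a given boot consumed, which is useful when correlating logs across re-provisioned instances.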
Mar 4 02:17:30.067600 ignition[792]: disks: disks passed
Mar 4 02:17:30.071484 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 02:17:30.067669 ignition[792]: Ignition finished successfully
Mar 4 02:17:30.073109 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 02:17:30.075250 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 02:17:30.076741 systemd[1]: Reached target basic.target - Basic System.
Mar 4 02:17:30.090612 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 4 02:17:30.108409 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 4 02:17:30.111702 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 4 02:17:30.118495 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 4 02:17:30.232386 kernel: EXT4-fs (vda9): mounted filesystem 77c4d29a-0423-4e33-8b82-61754d97532c r/w with ordered data mode. Quota mode: none.
Mar 4 02:17:30.233140 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 4 02:17:30.234528 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 4 02:17:30.248677 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 4 02:17:30.251383 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 4 02:17:30.253202 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 4 02:17:30.260619 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Mar 4 02:17:30.263208 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 4 02:17:30.276359 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (808)
Mar 4 02:17:30.276422 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:17:30.276450 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 02:17:30.276470 kernel: BTRFS info (device vda6): using free space tree
Mar 4 02:17:30.266458 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 02:17:30.278526 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 4 02:17:30.292385 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 02:17:30.289578 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 4 02:17:30.295254 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 02:17:30.360413 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Mar 4 02:17:30.366528 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Mar 4 02:17:30.373437 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Mar 4 02:17:30.380513 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 4 02:17:30.482589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 4 02:17:30.488545 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 4 02:17:30.491549 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 4 02:17:30.504397 kernel: BTRFS info (device vda6): last unmount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:17:30.528881 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 4 02:17:30.535344 ignition[925]: INFO : Ignition 2.19.0
Mar 4 02:17:30.536244 ignition[925]: INFO : Stage: mount
Mar 4 02:17:30.537213 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:17:30.539418 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:17:30.539418 ignition[925]: INFO : mount: mount passed
Mar 4 02:17:30.539418 ignition[925]: INFO : Ignition finished successfully
Mar 4 02:17:30.542337 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 4 02:17:30.703679 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 4 02:17:31.562139 systemd-networkd[769]: eth0: Gained IPv6LL
Mar 4 02:17:33.069694 systemd-networkd[769]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9091:24:19ff:fee6:4246/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9091:24:19ff:fee6:4246/64 assigned by NDisc.
Mar 4 02:17:33.069714 systemd-networkd[769]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Mar 4 02:17:37.438040 coreos-metadata[810]: Mar 04 02:17:37.437 WARN failed to locate config-drive, using the metadata service API instead
Mar 4 02:17:37.461205 coreos-metadata[810]: Mar 04 02:17:37.461 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Mar 4 02:17:37.482928 coreos-metadata[810]: Mar 04 02:17:37.482 INFO Fetch successful
Mar 4 02:17:37.484781 coreos-metadata[810]: Mar 04 02:17:37.483 INFO wrote hostname srv-mtsxv.gb1.brightbox.com to /sysroot/etc/hostname
Mar 4 02:17:37.486063 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Mar 4 02:17:37.486249 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Mar 4 02:17:37.507647 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 4 02:17:37.516867 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
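The networkd hint in the DHCPv6/NDisc conflict above can be acted on with an override unit. A hedged sketch of such a drop-in, using the `IPv6Token=` setting the log itself suggests; the file name and token value are illustrative, not taken from this system:

```ini
# /etc/systemd/network/10-eth0.network (hypothetical override of zz-default.network)
[Match]
Name=eth0

[Network]
DHCP=yes
# Pin the interface identifier used for SLAAC/NDisc-generated addresses
# so it no longer collides with the DHCPv6-assigned /128 address.
IPv6Token=::10
```

Alternatively, as the hint notes, `UseAutonomousPrefix=no` in the `[IPv6AcceptRA]` section disables SLAAC address generation for that link entirely.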
Mar 4 02:17:37.555474 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (941)
Mar 4 02:17:37.560738 kernel: BTRFS info (device vda6): first mount of filesystem 71a972ce-abd4-4705-b1cd-2b663b77d747
Mar 4 02:17:37.560789 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 4 02:17:37.562608 kernel: BTRFS info (device vda6): using free space tree
Mar 4 02:17:37.568526 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 4 02:17:37.570791 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 4 02:17:37.602408 ignition[959]: INFO : Ignition 2.19.0
Mar 4 02:17:37.603616 ignition[959]: INFO : Stage: files
Mar 4 02:17:37.604632 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:17:37.605476 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:17:37.606718 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Mar 4 02:17:37.608623 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 4 02:17:37.608623 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 4 02:17:37.612641 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 4 02:17:37.614127 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 4 02:17:37.615263 unknown[959]: wrote ssh authorized keys file for user: core
Mar 4 02:17:37.616287 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 4 02:17:37.617560 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 02:17:37.618779 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Mar 4 02:17:37.618779 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 02:17:37.618779 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 4 02:17:37.819944 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Mar 4 02:17:38.298866 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 4 02:17:38.306340 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 02:17:38.313747 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 4 02:17:38.732268 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Mar 4 02:17:40.675477 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 4 02:17:40.675477 ignition[959]: INFO : files: op(c): [started] processing unit "containerd.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(c): [finished] processing unit "containerd.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Mar 4 02:17:40.680192 ignition[959]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Mar 4 02:17:40.693650 ignition[959]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Mar 4 02:17:40.693650 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 02:17:40.693650 ignition[959]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 4 02:17:40.693650 ignition[959]: INFO : files: files passed
Mar 4 02:17:40.693650 ignition[959]: INFO : Ignition finished successfully
Mar 4 02:17:40.686480 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 4 02:17:40.696731 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 4 02:17:40.708656 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 4 02:17:40.714292 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 4 02:17:40.715045 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 4 02:17:40.726984 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:17:40.726984 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:17:40.730689 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 4 02:17:40.733377 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 02:17:40.735775 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 4 02:17:40.740558 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 4 02:17:40.776890 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 4 02:17:40.777069 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 4 02:17:40.780461 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 4 02:17:40.781506 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 4 02:17:40.783012 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 4 02:17:40.793141 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 4 02:17:40.810855 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 02:17:40.820580 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 4 02:17:40.834986 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 4 02:17:40.836011 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 02:17:40.837692 systemd[1]: Stopped target timers.target - Timer Units.
Mar 4 02:17:40.839122 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 4 02:17:40.839281 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 4 02:17:40.841098 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 4 02:17:40.842012 systemd[1]: Stopped target basic.target - Basic System.
Mar 4 02:17:40.843417 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 4 02:17:40.844691 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 4 02:17:40.846060 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 4 02:17:40.847536 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 4 02:17:40.849018 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 4 02:17:40.850620 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 4 02:17:40.852043 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 4 02:17:40.853565 systemd[1]: Stopped target swap.target - Swaps.
Mar 4 02:17:40.854912 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 4 02:17:40.855078 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 4 02:17:40.856747 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 4 02:17:40.857646 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 02:17:40.858985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 4 02:17:40.859800 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 02:17:40.860648 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 4 02:17:40.860826 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 4 02:17:40.862827 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 4 02:17:40.862996 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 4 02:17:40.864727 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 4 02:17:40.864904 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 4 02:17:40.874094 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 4 02:17:40.876711 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 4 02:17:40.888625 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 4 02:17:40.889197 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 02:17:40.891383 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 4 02:17:40.891697 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 4 02:17:40.899109 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 4 02:17:40.899315 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 4 02:17:40.904419 ignition[1012]: INFO : Ignition 2.19.0
Mar 4 02:17:40.904419 ignition[1012]: INFO : Stage: umount
Mar 4 02:17:40.904419 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 4 02:17:40.904419 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Mar 4 02:17:40.904419 ignition[1012]: INFO : umount: umount passed
Mar 4 02:17:40.904419 ignition[1012]: INFO : Ignition finished successfully
Mar 4 02:17:40.905071 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 4 02:17:40.906478 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 4 02:17:40.910849 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 4 02:17:40.910932 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 4 02:17:40.912891 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 4 02:17:40.912963 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 4 02:17:40.914834 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 4 02:17:40.914903 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 4 02:17:40.915588 systemd[1]: Stopped target network.target - Network.
Mar 4 02:17:40.916191 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 4 02:17:40.916257 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 4 02:17:40.919678 systemd[1]: Stopped target paths.target - Path Units.
Mar 4 02:17:40.920619 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 4 02:17:40.922616 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 02:17:40.923693 systemd[1]: Stopped target slices.target - Slice Units.
Mar 4 02:17:40.924282 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 4 02:17:40.925002 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 4 02:17:40.925065 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 4 02:17:40.926424 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 4 02:17:40.926517 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 4 02:17:40.927929 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 4 02:17:40.928006 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 4 02:17:40.929292 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 4 02:17:40.929367 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 4 02:17:40.931234 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 4 02:17:40.933342 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 4 02:17:40.935507 systemd-networkd[769]: eth0: DHCPv6 lease lost
Mar 4 02:17:40.937355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 4 02:17:40.939146 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 4 02:17:40.939337 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 4 02:17:40.942877 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 4 02:17:40.943019 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 4 02:17:40.944340 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 4 02:17:40.944579 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 4 02:17:40.949775 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 4 02:17:40.950252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 02:17:40.951613 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 4 02:17:40.951700 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 4 02:17:40.958559 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 4 02:17:40.959312 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 4 02:17:40.959400 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 4 02:17:40.960847 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 4 02:17:40.960913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 4 02:17:40.964252 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 4 02:17:40.964331 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 4 02:17:40.965157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 4 02:17:40.965222 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 02:17:40.966172 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 02:17:40.976940 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 4 02:17:40.977179 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 02:17:40.980666 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 4 02:17:40.980773 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 4 02:17:40.981736 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 4 02:17:40.981805 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 02:17:40.983155 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 4 02:17:40.983224 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 4 02:17:40.985315 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 4 02:17:40.985425 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 4 02:17:40.986828 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 4 02:17:40.986902 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 4 02:17:40.993578 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 4 02:17:40.994388 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 4 02:17:40.994466 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 02:17:40.995983 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 4 02:17:40.996048 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 02:17:40.999158 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 4 02:17:40.999228 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 02:17:41.000054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 4 02:17:41.000134 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 02:17:41.003058 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 4 02:17:41.004449 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 4 02:17:41.006291 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 4 02:17:41.006466 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 4 02:17:41.008933 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 4 02:17:41.019591 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 4 02:17:41.030257 systemd[1]: Switching root.
Mar 4 02:17:41.069197 systemd-journald[202]: Journal stopped
Mar 4 02:17:42.550259 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Mar 4 02:17:42.550400 kernel: SELinux: policy capability network_peer_controls=1
Mar 4 02:17:42.550449 kernel: SELinux: policy capability open_perms=1
Mar 4 02:17:42.550469 kernel: SELinux: policy capability extended_socket_class=1
Mar 4 02:17:42.550498 kernel: SELinux: policy capability always_check_network=0
Mar 4 02:17:42.550515 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 4 02:17:42.550532 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 4 02:17:42.550547 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 4 02:17:42.550570 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 4 02:17:42.550602 kernel: audit: type=1403 audit(1772590661.390:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 4 02:17:42.550640 systemd[1]: Successfully loaded SELinux policy in 55.182ms.
Mar 4 02:17:42.550676 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 20.979ms.
Mar 4 02:17:42.550698 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 4 02:17:42.550730 systemd[1]: Detected virtualization kvm.
Mar 4 02:17:42.550761 systemd[1]: Detected architecture x86-64.
Mar 4 02:17:42.550781 systemd[1]: Detected first boot.
Mar 4 02:17:42.550800 systemd[1]: Hostname set to .
Mar 4 02:17:42.550831 systemd[1]: Initializing machine ID from VM UUID.
Mar 4 02:17:42.550851 zram_generator::config[1077]: No configuration found.
Mar 4 02:17:42.550877 systemd[1]: Populated /etc with preset unit settings.
Mar 4 02:17:42.550897 systemd[1]: Queued start job for default target multi-user.target.
Mar 4 02:17:42.550916 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 4 02:17:42.550936 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 4 02:17:42.550955 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 4 02:17:42.550985 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 4 02:17:42.551020 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 4 02:17:42.551042 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 4 02:17:42.551061 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 4 02:17:42.551081 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 4 02:17:42.551100 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 4 02:17:42.551122 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 4 02:17:42.551154 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 4 02:17:42.551172 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 4 02:17:42.551212 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 4 02:17:42.551253 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 4 02:17:42.551281 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 4 02:17:42.551301 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 4 02:17:42.551328 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 4 02:17:42.551347 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 4 02:17:42.551385 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 4 02:17:42.551406 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 4 02:17:42.551438 systemd[1]: Reached target slices.target - Slice Units.
Mar 4 02:17:42.551459 systemd[1]: Reached target swap.target - Swaps.
Mar 4 02:17:42.551478 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 4 02:17:42.551498 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 4 02:17:42.551518 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 4 02:17:42.551574 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Mar 4 02:17:42.551594 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 4 02:17:42.551612 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 4 02:17:42.551644 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 4 02:17:42.551663 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 4 02:17:42.551681 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 4 02:17:42.551712 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 4 02:17:42.551762 systemd[1]: Mounting media.mount - External Media Directory...
Mar 4 02:17:42.551784 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:42.551815 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 4 02:17:42.551837 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 4 02:17:42.551856 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 4 02:17:42.551876 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 4 02:17:42.551895 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 02:17:42.551914 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 4 02:17:42.551940 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 4 02:17:42.551961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 02:17:42.551981 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 02:17:42.552012 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 02:17:42.552034 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 4 02:17:42.552060 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 02:17:42.552087 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 4 02:17:42.552107 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Mar 4 02:17:42.552127 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Mar 4 02:17:42.552158 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 4 02:17:42.552176 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 4 02:17:42.552194 kernel: fuse: init (API version 7.39)
Mar 4 02:17:42.552223 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 4 02:17:42.552243 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 4 02:17:42.552279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 4 02:17:42.552299 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:42.552316 kernel: loop: module loaded
Mar 4 02:17:42.552334 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 4 02:17:42.552352 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 4 02:17:42.554690 systemd[1]: Mounted media.mount - External Media Directory.
Mar 4 02:17:42.554757 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 4 02:17:42.554806 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 4 02:17:42.554826 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 4 02:17:42.554845 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 4 02:17:42.554892 systemd-journald[1184]: Collecting audit messages is disabled.
Mar 4 02:17:42.554927 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 4 02:17:42.554962 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 4 02:17:42.554983 systemd-journald[1184]: Journal started
Mar 4 02:17:42.555025 systemd-journald[1184]: Runtime Journal (/run/log/journal/54964f994ae24a1f9ea93bff2a26a7a4) is 4.7M, max 38.0M, 33.2M free.
Mar 4 02:17:42.560050 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 4 02:17:42.563817 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 4 02:17:42.564689 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 02:17:42.564940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 02:17:42.566070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 02:17:42.566297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 02:17:42.567545 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 4 02:17:42.567796 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 4 02:17:42.571056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 02:17:42.571302 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 02:17:42.572545 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 4 02:17:42.573674 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 4 02:17:42.574881 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 4 02:17:42.586417 kernel: ACPI: bus type drm_connector registered
Mar 4 02:17:42.588319 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 02:17:42.590659 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 02:17:42.600346 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 4 02:17:42.607482 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 4 02:17:42.613457 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 4 02:17:42.614219 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 4 02:17:42.626634 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 4 02:17:42.636786 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 4 02:17:42.637668 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 02:17:42.646571 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 4 02:17:42.649546 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 02:17:42.653249 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 4 02:17:42.670556 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 4 02:17:42.677095 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 4 02:17:42.678052 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 4 02:17:42.678470 systemd-journald[1184]: Time spent on flushing to /var/log/journal/54964f994ae24a1f9ea93bff2a26a7a4 is 43.563ms for 1127 entries.
Mar 4 02:17:42.678470 systemd-journald[1184]: System Journal (/var/log/journal/54964f994ae24a1f9ea93bff2a26a7a4) is 8.0M, max 584.8M, 576.8M free.
Mar 4 02:17:42.746828 systemd-journald[1184]: Received client request to flush runtime journal.
Mar 4 02:17:42.697963 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 4 02:17:42.699073 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 4 02:17:42.742847 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 4 02:17:42.750190 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 4 02:17:42.777499 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 4 02:17:42.777537 systemd-tmpfiles[1229]: ACLs are not supported, ignoring.
Mar 4 02:17:42.793884 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 4 02:17:42.804499 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 4 02:17:42.878180 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 4 02:17:42.894609 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 4 02:17:42.898058 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 4 02:17:42.910670 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 4 02:17:42.936456 udevadm[1251]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 4 02:17:42.941009 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Mar 4 02:17:42.941434 systemd-tmpfiles[1248]: ACLs are not supported, ignoring.
Mar 4 02:17:42.949111 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 4 02:17:43.368735 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 4 02:17:43.379628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 4 02:17:43.412706 systemd-udevd[1257]: Using default interface naming scheme 'v255'.
Mar 4 02:17:43.439970 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 4 02:17:43.458233 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 4 02:17:43.491657 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 4 02:17:43.567619 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 4 02:17:43.574083 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Mar 4 02:17:43.600401 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1271)
Mar 4 02:17:43.706791 systemd-networkd[1264]: lo: Link UP
Mar 4 02:17:43.711415 systemd-networkd[1264]: lo: Gained carrier
Mar 4 02:17:43.717767 systemd-networkd[1264]: Enumeration completed
Mar 4 02:17:43.720048 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 02:17:43.721948 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 4 02:17:43.722639 systemd-networkd[1264]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 4 02:17:43.725243 systemd-networkd[1264]: eth0: Link UP
Mar 4 02:17:43.725707 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 4 02:17:43.729433 systemd-networkd[1264]: eth0: Gained carrier
Mar 4 02:17:43.729582 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 02:17:43.734886 systemd-networkd[1264]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 4 02:17:43.735045 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 4 02:17:43.749995 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 4 02:17:43.748532 systemd-networkd[1264]: eth0: DHCPv4 address 10.230.66.70/30, gateway 10.230.66.69 acquired from 10.230.66.69
Mar 4 02:17:43.754614 kernel: ACPI: button: Power Button [PWRF]
Mar 4 02:17:43.789423 kernel: mousedev: PS/2 mouse device common for all mice
Mar 4 02:17:43.810407 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 4 02:17:43.821446 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 4 02:17:43.827263 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 4 02:17:43.827773 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 4 02:17:43.879922 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 4 02:17:44.051806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 4 02:17:44.085066 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 4 02:17:44.093699 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 4 02:17:44.112403 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 02:17:44.148972 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 4 02:17:44.150732 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 4 02:17:44.158741 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 4 02:17:44.168447 lvm[1300]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 4 02:17:44.203700 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 4 02:17:44.204925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 4 02:17:44.205676 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 4 02:17:44.205739 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 4 02:17:44.206424 systemd[1]: Reached target machines.target - Containers.
Mar 4 02:17:44.208809 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 4 02:17:44.217622 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 4 02:17:44.222640 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 4 02:17:44.223644 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 02:17:44.226478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 4 02:17:44.236677 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 4 02:17:44.242648 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 4 02:17:44.252732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 4 02:17:44.256968 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 4 02:17:44.270192 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 4 02:17:44.272632 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 4 02:17:44.282333 kernel: loop0: detected capacity change from 0 to 8
Mar 4 02:17:44.299750 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 4 02:17:44.327889 kernel: loop1: detected capacity change from 0 to 142488
Mar 4 02:17:44.371814 kernel: loop2: detected capacity change from 0 to 140768
Mar 4 02:17:44.417522 kernel: loop3: detected capacity change from 0 to 228704
Mar 4 02:17:44.462072 kernel: loop4: detected capacity change from 0 to 8
Mar 4 02:17:44.468618 kernel: loop5: detected capacity change from 0 to 142488
Mar 4 02:17:44.490619 kernel: loop6: detected capacity change from 0 to 140768
Mar 4 02:17:44.518392 kernel: loop7: detected capacity change from 0 to 228704
Mar 4 02:17:44.534831 (sd-merge)[1322]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Mar 4 02:17:44.537400 (sd-merge)[1322]: Merged extensions into '/usr'.
Mar 4 02:17:44.542715 systemd[1]: Reloading requested from client PID 1308 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 4 02:17:44.542754 systemd[1]: Reloading...
Mar 4 02:17:44.631393 zram_generator::config[1348]: No configuration found.
Mar 4 02:17:44.825412 ldconfig[1304]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 4 02:17:44.866571 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 02:17:44.955294 systemd[1]: Reloading finished in 411 ms.
Mar 4 02:17:44.977244 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 4 02:17:44.978544 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 4 02:17:44.987572 systemd[1]: Starting ensure-sysext.service...
Mar 4 02:17:45.003744 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 4 02:17:45.014527 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)...
Mar 4 02:17:45.014558 systemd[1]: Reloading...
Mar 4 02:17:45.035072 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 4 02:17:45.035727 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 4 02:17:45.037155 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 4 02:17:45.037596 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Mar 4 02:17:45.037727 systemd-tmpfiles[1414]: ACLs are not supported, ignoring.
Mar 4 02:17:45.042598 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 02:17:45.042616 systemd-tmpfiles[1414]: Skipping /boot
Mar 4 02:17:45.064175 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot.
Mar 4 02:17:45.064196 systemd-tmpfiles[1414]: Skipping /boot
Mar 4 02:17:45.113075 zram_generator::config[1443]: No configuration found.
Mar 4 02:17:45.289032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 02:17:45.376328 systemd[1]: Reloading finished in 361 ms.
Mar 4 02:17:45.397531 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 4 02:17:45.419036 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 02:17:45.425588 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 4 02:17:45.433562 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 4 02:17:45.445558 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 4 02:17:45.458139 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 4 02:17:45.474772 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.475800 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 02:17:45.479641 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 02:17:45.491973 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 02:17:45.508326 augenrules[1530]: No rules
Mar 4 02:17:45.508962 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 02:17:45.512178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 02:17:45.512345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.515887 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 02:17:45.518871 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 4 02:17:45.521046 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 02:17:45.521314 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 02:17:45.528945 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 02:17:45.529470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 02:17:45.531944 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 02:17:45.532178 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 02:17:45.549781 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.550142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 02:17:45.563780 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 02:17:45.567099 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 02:17:45.570636 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 4 02:17:45.572610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 02:17:45.584251 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 4 02:17:45.585068 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.587934 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 4 02:17:45.591031 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 4 02:17:45.595916 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 02:17:45.596158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 02:17:45.598200 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 02:17:45.598596 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 02:17:45.616195 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 4 02:17:45.618619 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 4 02:17:45.623227 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 4 02:17:45.632870 systemd[1]: Finished ensure-sysext.service.
Mar 4 02:17:45.636212 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.637044 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 4 02:17:45.643604 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 4 02:17:45.646293 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 4 02:17:45.651514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 4 02:17:45.652432 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 4 02:17:45.664100 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 4 02:17:45.666963 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 4 02:17:45.667159 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 4 02:17:45.667944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 4 02:17:45.668185 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 4 02:17:45.671237 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 4 02:17:45.671480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 4 02:17:45.685301 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 4 02:17:45.690285 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 4 02:17:45.690904 systemd-resolved[1518]: Positive Trust Anchors:
Mar 4 02:17:45.691373 systemd-resolved[1518]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 4 02:17:45.691563 systemd-resolved[1518]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 4 02:17:45.692046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 4 02:17:45.692173 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 4 02:17:45.697735 systemd-resolved[1518]: Using system hostname 'srv-mtsxv.gb1.brightbox.com'.
Mar 4 02:17:45.700824 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 4 02:17:45.701833 systemd[1]: Reached target network.target - Network.
Mar 4 02:17:45.702499 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 4 02:17:45.706759 systemd-networkd[1264]: eth0: Gained IPv6LL
Mar 4 02:17:45.710850 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 4 02:17:45.714733 systemd[1]: Reached target network-online.target - Network is Online.
Mar 4 02:17:45.765711 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 4 02:17:45.767034 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 4 02:17:45.768039 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 4 02:17:45.769028 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 4 02:17:45.769833 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 4 02:17:45.770600 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 4 02:17:45.770650 systemd[1]: Reached target paths.target - Path Units.
Mar 4 02:17:45.771265 systemd[1]: Reached target time-set.target - System Time Set.
Mar 4 02:17:45.772152 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 4 02:17:45.773035 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 4 02:17:45.773787 systemd[1]: Reached target timers.target - Timer Units.
Mar 4 02:17:45.775303 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 4 02:17:45.778102 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 4 02:17:45.780601 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 4 02:17:45.783541 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 4 02:17:45.784337 systemd[1]: Reached target sockets.target - Socket Units.
Mar 4 02:17:45.784997 systemd[1]: Reached target basic.target - Basic System.
Mar 4 02:17:45.785883 systemd[1]: System is tainted: cgroupsv1
Mar 4 02:17:45.785933 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 4 02:17:45.785970 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 4 02:17:45.789487 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 4 02:17:45.801632 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 4 02:17:45.807566 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 4 02:17:45.828587 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 4 02:17:45.845776 jq[1588]: false
Mar 4 02:17:45.848593 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 4 02:17:45.851445 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 4 02:17:45.858876 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:17:45.859182 dbus-daemon[1584]: [system] SELinux support is enabled
Mar 4 02:17:45.863975 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 4 02:17:45.865825 dbus-daemon[1584]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1264 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 4 02:17:45.877595 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found loop4
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found loop5
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found loop6
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found loop7
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda1
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda2
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda3
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found usr
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda4
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda6
Mar 4 02:17:45.892408 extend-filesystems[1589]: Found vda7
Mar 4 02:17:45.907401 extend-filesystems[1589]: Found vda9
Mar 4 02:17:45.907401 extend-filesystems[1589]: Checking size of /dev/vda9
Mar 4 02:17:45.897496 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 4 02:17:45.914362 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 4 02:17:45.919578 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 4 02:17:45.943607 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 4 02:17:45.945020 extend-filesystems[1589]: Resized partition /dev/vda9
Mar 4 02:17:45.946334 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 4 02:17:45.948453 extend-filesystems[1615]: resize2fs 1.47.1 (20-May-2024)
Mar 4 02:17:45.956385 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks
Mar 4 02:17:45.956570 systemd[1]: Starting update-engine.service - Update Engine...
Mar 4 02:17:45.980979 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 4 02:17:45.989566 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 4 02:17:46.007441 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1265)
Mar 4 02:17:46.013100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 4 02:17:46.013566 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 4 02:17:46.015172 systemd[1]: motdgen.service: Deactivated successfully.
Mar 4 02:17:46.015539 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 4 02:17:46.029882 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 4 02:17:46.030219 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 4 02:17:46.041795 update_engine[1617]: I20260304 02:17:46.038529 1617 main.cc:92] Flatcar Update Engine starting
Mar 4 02:17:46.055774 jq[1621]: true
Mar 4 02:17:46.048697 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 4 02:17:46.058141 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 4 02:17:46.048747 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 4 02:17:46.049577 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 4 02:17:46.049617 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 4 02:17:46.067200 (ntainerd)[1633]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 4 02:17:46.075579 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 4 02:17:46.090220 systemd[1]: Started update-engine.service - Update Engine.
Mar 4 02:17:46.091575 update_engine[1617]: I20260304 02:17:46.090660 1617 update_check_scheduler.cc:74] Next update check in 10m33s Mar 4 02:17:46.102584 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Mar 4 02:17:46.106984 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Mar 4 02:17:46.117025 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 4 02:17:46.138443 jq[1637]: true Mar 4 02:17:46.172637 tar[1626]: linux-amd64/LICENSE Mar 4 02:17:46.172637 tar[1626]: linux-amd64/helm Mar 4 02:17:46.377460 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Mar 4 02:17:46.379855 bash[1662]: Updated "/home/core/.ssh/authorized_keys" Mar 4 02:17:46.381725 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 4 02:17:46.395574 systemd[1]: Starting sshkeys.service... Mar 4 02:17:46.426021 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Mar 4 02:17:46.429961 systemd-logind[1611]: Watching system buttons on /dev/input/event2 (Power Button) Mar 4 02:17:46.430005 systemd-logind[1611]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 4 02:17:46.436944 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Mar 4 02:17:46.441503 extend-filesystems[1615]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 4 02:17:46.441503 extend-filesystems[1615]: old_desc_blocks = 1, new_desc_blocks = 8 Mar 4 02:17:46.441503 extend-filesystems[1615]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Mar 4 02:17:46.469655 extend-filesystems[1589]: Resized filesystem in /dev/vda9 Mar 4 02:17:46.451321 systemd-logind[1611]: New seat seat0. Mar 4 02:17:46.452905 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Mar 4 02:17:46.453265 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 4 02:17:46.471826 systemd[1]: Started systemd-logind.service - User Login Management. Mar 4 02:17:46.497634 locksmithd[1645]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 4 02:17:46.586595 systemd-networkd[1264]: eth0: Ignoring DHCPv6 address 2a02:1348:179:9091:24:19ff:fee6:4246/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:9091:24:19ff:fee6:4246/64 assigned by NDisc. Mar 4 02:17:46.586605 systemd-networkd[1264]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Mar 4 02:17:46.610311 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.hostname1' Mar 4 02:17:46.611630 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Mar 4 02:17:46.613615 dbus-daemon[1584]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1644 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Mar 4 02:17:46.626040 systemd[1]: Starting polkit.service - Authorization Manager... Mar 4 02:17:46.662726 polkitd[1685]: Started polkitd version 121 Mar 4 02:17:46.672765 polkitd[1685]: Loading rules from directory /etc/polkit-1/rules.d Mar 4 02:17:46.672857 polkitd[1685]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 4 02:17:46.673843 polkitd[1685]: Finished loading, compiling and executing 2 rules Mar 4 02:17:46.681132 dbus-daemon[1584]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 4 02:17:46.681361 systemd[1]: Started polkit.service - Authorization Manager. 
Mar 4 02:17:46.684152 polkitd[1685]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 4 02:17:46.703831 systemd-hostnamed[1644]: Hostname set to (static) Mar 4 02:17:46.738052 containerd[1633]: time="2026-03-04T02:17:46.737174452Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Mar 4 02:17:46.821132 containerd[1633]: time="2026-03-04T02:17:46.821047970Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.830282 containerd[1633]: time="2026-03-04T02:17:46.830228520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Mar 4 02:17:46.830388 containerd[1633]: time="2026-03-04T02:17:46.830284487Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Mar 4 02:17:46.830388 containerd[1633]: time="2026-03-04T02:17:46.830329709Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Mar 4 02:17:46.831019 containerd[1633]: time="2026-03-04T02:17:46.830808702Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Mar 4 02:17:46.831019 containerd[1633]: time="2026-03-04T02:17:46.830849929Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831019 containerd[1633]: time="2026-03-04T02:17:46.830986099Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831019 containerd[1633]: time="2026-03-04T02:17:46.831009447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831457 containerd[1633]: time="2026-03-04T02:17:46.831314558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831457 containerd[1633]: time="2026-03-04T02:17:46.831347282Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831457 containerd[1633]: time="2026-03-04T02:17:46.831379607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831457 containerd[1633]: time="2026-03-04T02:17:46.831413643Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.831851 containerd[1633]: time="2026-03-04T02:17:46.831568753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.832389 containerd[1633]: time="2026-03-04T02:17:46.831970214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Mar 4 02:17:46.832389 containerd[1633]: time="2026-03-04T02:17:46.832165545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Mar 4 02:17:46.832389 containerd[1633]: time="2026-03-04T02:17:46.832190469Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Mar 4 02:17:46.832389 containerd[1633]: time="2026-03-04T02:17:46.832315067Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Mar 4 02:17:46.835398 containerd[1633]: time="2026-03-04T02:17:46.834751031Z" level=info msg="metadata content store policy set" policy=shared Mar 4 02:17:46.842373 containerd[1633]: time="2026-03-04T02:17:46.842313659Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Mar 4 02:17:46.842533 containerd[1633]: time="2026-03-04T02:17:46.842449916Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Mar 4 02:17:46.842533 containerd[1633]: time="2026-03-04T02:17:46.842479696Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Mar 4 02:17:46.842533 containerd[1633]: time="2026-03-04T02:17:46.842501974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Mar 4 02:17:46.842533 containerd[1633]: time="2026-03-04T02:17:46.842527531Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Mar 4 02:17:46.842937 containerd[1633]: time="2026-03-04T02:17:46.842760004Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843309682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843523671Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843549829Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843570520Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843593362Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843632028Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843656849Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843713355Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843737056Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843755843Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843779810Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843801988Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843839832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.844092 containerd[1633]: time="2026-03-04T02:17:46.843863605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.846334 containerd[1633]: time="2026-03-04T02:17:46.843894030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.846334 containerd[1633]: time="2026-03-04T02:17:46.843939095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.846334 containerd[1633]: time="2026-03-04T02:17:46.843966788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.846334 containerd[1633]: time="2026-03-04T02:17:46.844005312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.849851 containerd[1633]: time="2026-03-04T02:17:46.849580737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.849851 containerd[1633]: time="2026-03-04T02:17:46.849624387Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.849851 containerd[1633]: time="2026-03-04T02:17:46.849651385Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.852851 containerd[1633]: time="2026-03-04T02:17:46.852810737Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Mar 4 02:17:46.852908 containerd[1633]: time="2026-03-04T02:17:46.852851539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.852908 containerd[1633]: time="2026-03-04T02:17:46.852881501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.852908 containerd[1633]: time="2026-03-04T02:17:46.852905267Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.853265 containerd[1633]: time="2026-03-04T02:17:46.852929214Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Mar 4 02:17:46.853265 containerd[1633]: time="2026-03-04T02:17:46.852970524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.853265 containerd[1633]: time="2026-03-04T02:17:46.852995221Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.853265 containerd[1633]: time="2026-03-04T02:17:46.853015081Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854473794Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854633778Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854682788Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854704271Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854722539Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854751076Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854769783Z" level=info msg="NRI interface is disabled by configuration." Mar 4 02:17:46.855449 containerd[1633]: time="2026-03-04T02:17:46.854785359Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Mar 4 02:17:46.855734 containerd[1633]: time="2026-03-04T02:17:46.855222155Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 4 02:17:46.855734 containerd[1633]: time="2026-03-04T02:17:46.855310652Z" level=info msg="Connect containerd service" Mar 4 02:17:46.858456 containerd[1633]: time="2026-03-04T02:17:46.855361451Z" level=info msg="using legacy CRI server" Mar 4 02:17:46.858456 containerd[1633]: time="2026-03-04T02:17:46.857052631Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 4 02:17:46.858456 containerd[1633]: time="2026-03-04T02:17:46.857286779Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 4 02:17:46.858687 containerd[1633]: time="2026-03-04T02:17:46.858607923Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 4 02:17:46.862613 containerd[1633]: time="2026-03-04T02:17:46.862556151Z" level=info msg="Start subscribing containerd event" Mar 4 02:17:46.862706 containerd[1633]: time="2026-03-04T02:17:46.862655277Z" level=info msg="Start recovering state" Mar 4 02:17:46.864459 containerd[1633]: time="2026-03-04T02:17:46.862785725Z" level=info msg="Start event monitor" Mar 4 02:17:46.864459 containerd[1633]: time="2026-03-04T02:17:46.862843536Z" level=info msg="Start snapshots syncer" Mar 4 02:17:46.864459 containerd[1633]: time="2026-03-04T02:17:46.862871241Z" level=info msg="Start cni network conf syncer for default" Mar 4 02:17:46.864459 containerd[1633]: time="2026-03-04T02:17:46.862888504Z" level=info msg="Start streaming server" Mar 4 02:17:46.866771 containerd[1633]: time="2026-03-04T02:17:46.866742094Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 4 02:17:46.866856 containerd[1633]: time="2026-03-04T02:17:46.866829556Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 4 02:17:46.866978 containerd[1633]: time="2026-03-04T02:17:46.866951651Z" level=info msg="containerd successfully booted in 0.134092s" Mar 4 02:17:46.867143 systemd[1]: Started containerd.service - containerd container runtime. Mar 4 02:17:47.155936 sshd_keygen[1616]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 4 02:17:47.211730 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 4 02:17:47.226498 systemd[1]: Starting issuegen.service - Generate /run/issue... 
Mar 4 02:17:47.258766 systemd[1]: issuegen.service: Deactivated successfully. Mar 4 02:17:47.259165 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 4 02:17:47.272742 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 4 02:17:47.287182 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 4 02:17:47.300633 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 4 02:17:47.310256 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 4 02:17:47.313812 systemd[1]: Reached target getty.target - Login Prompts. Mar 4 02:17:47.462168 tar[1626]: linux-amd64/README.md Mar 4 02:17:47.480486 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 4 02:17:47.670676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 02:17:47.676303 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 02:17:48.434811 systemd-resolved[1518]: Clock change detected. Flushing caches. Mar 4 02:17:48.434983 systemd-timesyncd[1570]: Contacted time server 178.79.138.215:123 (0.flatcar.pool.ntp.org). Mar 4 02:17:48.435087 systemd-timesyncd[1570]: Initial clock synchronization to Wed 2026-03-04 02:17:48.434708 UTC. Mar 4 02:17:49.008052 kubelet[1736]: E0304 02:17:49.007899 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 02:17:49.011002 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 02:17:49.011425 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Mar 4 02:17:53.077034 login[1721]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Mar 4 02:17:53.079717 login[1722]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 4 02:17:53.097748 systemd-logind[1611]: New session 1 of user core. Mar 4 02:17:53.100466 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 4 02:17:53.109236 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 4 02:17:53.133947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 4 02:17:53.147449 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 4 02:17:53.153570 (systemd)[1755]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 4 02:17:53.299687 systemd[1755]: Queued start job for default target default.target. Mar 4 02:17:53.300782 systemd[1755]: Created slice app.slice - User Application Slice. Mar 4 02:17:53.300843 systemd[1755]: Reached target paths.target - Paths. Mar 4 02:17:53.300864 systemd[1755]: Reached target timers.target - Timers. Mar 4 02:17:53.315947 systemd[1755]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 4 02:17:53.325302 systemd[1755]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 4 02:17:53.325482 systemd[1755]: Reached target sockets.target - Sockets. Mar 4 02:17:53.325669 systemd[1755]: Reached target basic.target - Basic System. Mar 4 02:17:53.325753 systemd[1755]: Reached target default.target - Main User Target. Mar 4 02:17:53.325839 systemd[1755]: Startup finished in 161ms. Mar 4 02:17:53.325870 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 4 02:17:53.331163 systemd[1]: Started session-1.scope - Session 1 of User core. 
Mar 4 02:17:53.606455 coreos-metadata[1583]: Mar 04 02:17:53.606 WARN failed to locate config-drive, using the metadata service API instead Mar 4 02:17:53.630968 coreos-metadata[1583]: Mar 04 02:17:53.630 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Mar 4 02:17:53.646352 coreos-metadata[1583]: Mar 04 02:17:53.646 INFO Fetch failed with 404: resource not found Mar 4 02:17:53.646352 coreos-metadata[1583]: Mar 04 02:17:53.646 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Mar 4 02:17:53.647347 coreos-metadata[1583]: Mar 04 02:17:53.647 INFO Fetch successful Mar 4 02:17:53.647528 coreos-metadata[1583]: Mar 04 02:17:53.647 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Mar 4 02:17:53.676623 coreos-metadata[1583]: Mar 04 02:17:53.676 INFO Fetch successful Mar 4 02:17:53.676805 coreos-metadata[1583]: Mar 04 02:17:53.676 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Mar 4 02:17:53.694108 coreos-metadata[1583]: Mar 04 02:17:53.694 INFO Fetch successful Mar 4 02:17:53.694245 coreos-metadata[1583]: Mar 04 02:17:53.694 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Mar 4 02:17:53.707511 coreos-metadata[1583]: Mar 04 02:17:53.707 INFO Fetch successful Mar 4 02:17:53.707679 coreos-metadata[1583]: Mar 04 02:17:53.707 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Mar 4 02:17:53.734143 coreos-metadata[1583]: Mar 04 02:17:53.734 INFO Fetch successful Mar 4 02:17:53.769668 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Mar 4 02:17:53.771377 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 4 02:17:54.078366 login[1721]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Mar 4 02:17:54.085707 systemd-logind[1611]: New session 2 of user core. 
Mar 4 02:17:54.094387 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 4 02:17:54.465194 coreos-metadata[1671]: Mar 04 02:17:54.465 WARN failed to locate config-drive, using the metadata service API instead Mar 4 02:17:54.486844 coreos-metadata[1671]: Mar 04 02:17:54.486 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Mar 4 02:17:54.528297 coreos-metadata[1671]: Mar 04 02:17:54.528 INFO Fetch successful Mar 4 02:17:54.528470 coreos-metadata[1671]: Mar 04 02:17:54.528 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Mar 4 02:17:54.567027 coreos-metadata[1671]: Mar 04 02:17:54.566 INFO Fetch successful Mar 4 02:17:54.569009 unknown[1671]: wrote ssh authorized keys file for user: core Mar 4 02:17:54.594875 update-ssh-keys[1799]: Updated "/home/core/.ssh/authorized_keys" Mar 4 02:17:54.595743 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Mar 4 02:17:54.601155 systemd[1]: Finished sshkeys.service. Mar 4 02:17:54.608884 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 4 02:17:54.609227 systemd[1]: Startup finished in 16.011s (kernel) + 12.574s (userspace) = 28.586s. Mar 4 02:17:55.131849 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 4 02:17:55.140161 systemd[1]: Started sshd@0-10.230.66.70:22-20.161.92.111:60816.service - OpenSSH per-connection server daemon (20.161.92.111:60816). Mar 4 02:17:55.710523 sshd[1806]: Accepted publickey for core from 20.161.92.111 port 60816 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:17:55.712614 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:17:55.718873 systemd-logind[1611]: New session 3 of user core. Mar 4 02:17:55.730432 systemd[1]: Started session-3.scope - Session 3 of User core. 
Mar 4 02:17:56.203146 systemd[1]: Started sshd@1-10.230.66.70:22-20.161.92.111:60820.service - OpenSSH per-connection server daemon (20.161.92.111:60820). Mar 4 02:17:56.768873 sshd[1811]: Accepted publickey for core from 20.161.92.111 port 60820 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:17:56.771714 sshd[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:17:56.778400 systemd-logind[1611]: New session 4 of user core. Mar 4 02:17:56.784402 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 4 02:17:57.176064 sshd[1811]: pam_unix(sshd:session): session closed for user core Mar 4 02:17:57.182563 systemd[1]: sshd@1-10.230.66.70:22-20.161.92.111:60820.service: Deactivated successfully. Mar 4 02:17:57.184112 systemd-logind[1611]: Session 4 logged out. Waiting for processes to exit. Mar 4 02:17:57.186593 systemd[1]: session-4.scope: Deactivated successfully. Mar 4 02:17:57.188391 systemd-logind[1611]: Removed session 4. Mar 4 02:17:57.273157 systemd[1]: Started sshd@2-10.230.66.70:22-20.161.92.111:60834.service - OpenSSH per-connection server daemon (20.161.92.111:60834). Mar 4 02:17:57.843588 sshd[1819]: Accepted publickey for core from 20.161.92.111 port 60834 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:17:57.845669 sshd[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:17:57.851920 systemd-logind[1611]: New session 5 of user core. Mar 4 02:17:57.860238 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 4 02:17:58.242120 sshd[1819]: pam_unix(sshd:session): session closed for user core Mar 4 02:17:58.248197 systemd[1]: sshd@2-10.230.66.70:22-20.161.92.111:60834.service: Deactivated successfully. Mar 4 02:17:58.248852 systemd-logind[1611]: Session 5 logged out. Waiting for processes to exit. Mar 4 02:17:58.251251 systemd[1]: session-5.scope: Deactivated successfully. 
Mar 4 02:17:58.252872 systemd-logind[1611]: Removed session 5. Mar 4 02:17:58.347115 systemd[1]: Started sshd@3-10.230.66.70:22-20.161.92.111:60844.service - OpenSSH per-connection server daemon (20.161.92.111:60844). Mar 4 02:17:58.931865 sshd[1827]: Accepted publickey for core from 20.161.92.111 port 60844 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:17:58.933430 sshd[1827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:17:58.940029 systemd-logind[1611]: New session 6 of user core. Mar 4 02:17:58.950335 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 4 02:17:59.024704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 4 02:17:59.032044 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 4 02:17:59.261016 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 4 02:17:59.266264 (kubelet)[1843]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 4 02:17:59.349249 kubelet[1843]: E0304 02:17:59.349174 1843 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 4 02:17:59.353057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 4 02:17:59.353381 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 4 02:17:59.360055 sshd[1827]: pam_unix(sshd:session): session closed for user core Mar 4 02:17:59.365039 systemd[1]: sshd@3-10.230.66.70:22-20.161.92.111:60844.service: Deactivated successfully. Mar 4 02:17:59.367612 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 4 02:17:59.367961 systemd-logind[1611]: Session 6 logged out. Waiting for processes to exit.
Mar 4 02:17:59.370581 systemd-logind[1611]: Removed session 6.
Mar 4 02:17:59.505160 systemd[1]: Started sshd@4-10.230.66.70:22-20.161.92.111:60852.service - OpenSSH per-connection server daemon (20.161.92.111:60852).
Mar 4 02:18:00.118847 sshd[1855]: Accepted publickey for core from 20.161.92.111 port 60852 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:18:00.120361 sshd[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:18:00.126400 systemd-logind[1611]: New session 7 of user core.
Mar 4 02:18:00.138190 systemd[1]: Started session-7.scope - Session 7 of User core.
Mar 4 02:18:00.460019 sudo[1859]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Mar 4 02:18:00.460471 sudo[1859]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:18:00.489364 sudo[1859]: pam_unix(sudo:session): session closed for user root
Mar 4 02:18:00.584740 sshd[1855]: pam_unix(sshd:session): session closed for user core
Mar 4 02:18:00.591083 systemd[1]: sshd@4-10.230.66.70:22-20.161.92.111:60852.service: Deactivated successfully.
Mar 4 02:18:00.591147 systemd-logind[1611]: Session 7 logged out. Waiting for processes to exit.
Mar 4 02:18:00.594533 systemd[1]: session-7.scope: Deactivated successfully.
Mar 4 02:18:00.595643 systemd-logind[1611]: Removed session 7.
Mar 4 02:18:00.679212 systemd[1]: Started sshd@5-10.230.66.70:22-20.161.92.111:46738.service - OpenSSH per-connection server daemon (20.161.92.111:46738).
Mar 4 02:18:01.246477 sshd[1864]: Accepted publickey for core from 20.161.92.111 port 46738 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:18:01.248639 sshd[1864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:18:01.255927 systemd-logind[1611]: New session 8 of user core.
Mar 4 02:18:01.263254 systemd[1]: Started session-8.scope - Session 8 of User core.
Mar 4 02:18:01.566066 sudo[1869]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 4 02:18:01.566528 sudo[1869]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:18:01.572561 sudo[1869]: pam_unix(sudo:session): session closed for user root
Mar 4 02:18:01.580195 sudo[1868]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Mar 4 02:18:01.580623 sudo[1868]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:18:01.598335 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Mar 4 02:18:01.602677 auditctl[1872]: No rules
Mar 4 02:18:01.603471 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 4 02:18:01.603795 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Mar 4 02:18:01.611217 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Mar 4 02:18:01.645479 augenrules[1891]: No rules
Mar 4 02:18:01.647070 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Mar 4 02:18:01.649186 sudo[1868]: pam_unix(sudo:session): session closed for user root
Mar 4 02:18:01.739449 sshd[1864]: pam_unix(sshd:session): session closed for user core
Mar 4 02:18:01.743513 systemd-logind[1611]: Session 8 logged out. Waiting for processes to exit.
Mar 4 02:18:01.745414 systemd[1]: sshd@5-10.230.66.70:22-20.161.92.111:46738.service: Deactivated successfully.
Mar 4 02:18:01.749979 systemd[1]: session-8.scope: Deactivated successfully.
Mar 4 02:18:01.751251 systemd-logind[1611]: Removed session 8.
Mar 4 02:18:01.844174 systemd[1]: Started sshd@6-10.230.66.70:22-20.161.92.111:46742.service - OpenSSH per-connection server daemon (20.161.92.111:46742).
Mar 4 02:18:02.446859 sshd[1900]: Accepted publickey for core from 20.161.92.111 port 46742 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:18:02.448587 sshd[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:18:02.455394 systemd-logind[1611]: New session 9 of user core.
Mar 4 02:18:02.465435 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 4 02:18:02.776375 sudo[1904]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 4 02:18:02.777431 sudo[1904]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 4 02:18:03.232354 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 4 02:18:03.233306 (dockerd)[1920]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 4 02:18:03.691209 dockerd[1920]: time="2026-03-04T02:18:03.691046478Z" level=info msg="Starting up"
Mar 4 02:18:03.971295 dockerd[1920]: time="2026-03-04T02:18:03.970948308Z" level=info msg="Loading containers: start."
Mar 4 02:18:04.115186 kernel: Initializing XFRM netlink socket
Mar 4 02:18:04.232311 systemd-networkd[1264]: docker0: Link UP
Mar 4 02:18:04.245475 dockerd[1920]: time="2026-03-04T02:18:04.245427926Z" level=info msg="Loading containers: done."
Mar 4 02:18:04.263585 dockerd[1920]: time="2026-03-04T02:18:04.263539260Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 4 02:18:04.265045 dockerd[1920]: time="2026-03-04T02:18:04.264605207Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Mar 4 02:18:04.265344 dockerd[1920]: time="2026-03-04T02:18:04.265321445Z" level=info msg="Daemon has completed initialization"
Mar 4 02:18:04.302000 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 4 02:18:04.303109 dockerd[1920]: time="2026-03-04T02:18:04.301739711Z" level=info msg="API listen on /run/docker.sock"
Mar 4 02:18:04.966451 containerd[1633]: time="2026-03-04T02:18:04.965575545Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 4 02:18:05.672634 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666620199.mount: Deactivated successfully.
Mar 4 02:18:07.934488 containerd[1633]: time="2026-03-04T02:18:07.934411121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:07.937876 containerd[1633]: time="2026-03-04T02:18:07.937833778Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30116194"
Mar 4 02:18:07.938616 containerd[1633]: time="2026-03-04T02:18:07.938580530Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:07.944595 containerd[1633]: time="2026-03-04T02:18:07.944559978Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:07.947950 containerd[1633]: time="2026-03-04T02:18:07.947913333Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.982223236s"
Mar 4 02:18:07.948146 containerd[1633]: time="2026-03-04T02:18:07.948116760Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 4 02:18:07.950440 containerd[1633]: time="2026-03-04T02:18:07.950365058Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 4 02:18:09.526677 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 4 02:18:09.541400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:09.813174 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:09.825812 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:18:09.909931 kubelet[2138]: E0304 02:18:09.909411 2138 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:18:09.911262 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:18:09.911574 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:18:10.403422 containerd[1633]: time="2026-03-04T02:18:10.403332545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:10.406241 containerd[1633]: time="2026-03-04T02:18:10.406191156Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26021818"
Mar 4 02:18:10.407312 containerd[1633]: time="2026-03-04T02:18:10.407258274Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:10.411779 containerd[1633]: time="2026-03-04T02:18:10.411035990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:10.412814 containerd[1633]: time="2026-03-04T02:18:10.412755488Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 2.462180064s"
Mar 4 02:18:10.412896 containerd[1633]: time="2026-03-04T02:18:10.412821818Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 4 02:18:10.413877 containerd[1633]: time="2026-03-04T02:18:10.413840926Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 4 02:18:12.113255 containerd[1633]: time="2026-03-04T02:18:12.113118150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:12.115083 containerd[1633]: time="2026-03-04T02:18:12.114981344Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162754"
Mar 4 02:18:12.115827 containerd[1633]: time="2026-03-04T02:18:12.115721309Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:12.119852 containerd[1633]: time="2026-03-04T02:18:12.119787409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:12.123394 containerd[1633]: time="2026-03-04T02:18:12.121588244Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.7077028s"
Mar 4 02:18:12.123394 containerd[1633]: time="2026-03-04T02:18:12.121668365Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 4 02:18:12.124010 containerd[1633]: time="2026-03-04T02:18:12.123952355Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 4 02:18:14.210596 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3373241151.mount: Deactivated successfully.
Mar 4 02:18:15.191877 containerd[1633]: time="2026-03-04T02:18:15.190819358Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:15.194082 containerd[1633]: time="2026-03-04T02:18:15.192040226Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828655"
Mar 4 02:18:15.194082 containerd[1633]: time="2026-03-04T02:18:15.192404036Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:15.195825 containerd[1633]: time="2026-03-04T02:18:15.195070172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:15.196484 containerd[1633]: time="2026-03-04T02:18:15.196193005Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 3.072044581s"
Mar 4 02:18:15.196484 containerd[1633]: time="2026-03-04T02:18:15.196274657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 4 02:18:15.198371 containerd[1633]: time="2026-03-04T02:18:15.198329226Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 4 02:18:15.746774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185998703.mount: Deactivated successfully.
Mar 4 02:18:17.358685 containerd[1633]: time="2026-03-04T02:18:17.358483643Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.360633 containerd[1633]: time="2026-03-04T02:18:17.360506635Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942246"
Mar 4 02:18:17.362086 containerd[1633]: time="2026-03-04T02:18:17.362026291Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.369826 containerd[1633]: time="2026-03-04T02:18:17.367912705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.369826 containerd[1633]: time="2026-03-04T02:18:17.369377461Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.17098741s"
Mar 4 02:18:17.369826 containerd[1633]: time="2026-03-04T02:18:17.369430056Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 4 02:18:17.371064 containerd[1633]: time="2026-03-04T02:18:17.370698706Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 4 02:18:17.447326 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 4 02:18:17.903403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount171965167.mount: Deactivated successfully.
Mar 4 02:18:17.910611 containerd[1633]: time="2026-03-04T02:18:17.909172740Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.911443 containerd[1633]: time="2026-03-04T02:18:17.911391889Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146"
Mar 4 02:18:17.912672 containerd[1633]: time="2026-03-04T02:18:17.912640875Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.916902 containerd[1633]: time="2026-03-04T02:18:17.916862582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:17.919309 containerd[1633]: time="2026-03-04T02:18:17.919237778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 548.475402ms"
Mar 4 02:18:17.919425 containerd[1633]: time="2026-03-04T02:18:17.919335470Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 4 02:18:17.920585 containerd[1633]: time="2026-03-04T02:18:17.920204297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 4 02:18:18.512787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1115102033.mount: Deactivated successfully.
Mar 4 02:18:20.025087 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 4 02:18:20.033045 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:20.090815 containerd[1633]: time="2026-03-04T02:18:20.089787198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:20.093363 containerd[1633]: time="2026-03-04T02:18:20.093247670Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23718848"
Mar 4 02:18:20.094659 containerd[1633]: time="2026-03-04T02:18:20.094607480Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:20.100547 containerd[1633]: time="2026-03-04T02:18:20.098591336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:20.101257 containerd[1633]: time="2026-03-04T02:18:20.100349156Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 2.180098754s"
Mar 4 02:18:20.101257 containerd[1633]: time="2026-03-04T02:18:20.101251819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 4 02:18:20.598680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:20.610466 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 4 02:18:20.755892 kubelet[2301]: E0304 02:18:20.754394 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 4 02:18:20.759585 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 4 02:18:20.759956 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 4 02:18:23.870630 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:23.878122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:23.931173 systemd[1]: Reloading requested from client PID 2332 ('systemctl') (unit session-9.scope)...
Mar 4 02:18:23.931485 systemd[1]: Reloading...
Mar 4 02:18:24.129842 zram_generator::config[2367]: No configuration found.
Mar 4 02:18:24.300372 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 02:18:24.409512 systemd[1]: Reloading finished in 475 ms.
Mar 4 02:18:24.476819 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 4 02:18:24.476969 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 4 02:18:24.477545 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:24.488293 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:24.671070 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:24.685481 (kubelet)[2450]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 02:18:24.773838 kubelet[2450]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 02:18:24.773838 kubelet[2450]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 02:18:24.773838 kubelet[2450]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 02:18:24.774737 kubelet[2450]: I0304 02:18:24.774417 2450 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 02:18:25.458996 kubelet[2450]: I0304 02:18:25.458879 2450 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 02:18:25.458996 kubelet[2450]: I0304 02:18:25.458928 2450 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 02:18:25.459696 kubelet[2450]: I0304 02:18:25.459296 2450 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 02:18:25.505421 kubelet[2450]: E0304 02:18:25.505355 2450 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.66.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 02:18:25.507967 kubelet[2450]: I0304 02:18:25.507634 2450 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 02:18:25.521139 kubelet[2450]: E0304 02:18:25.521083 2450 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 02:18:25.521139 kubelet[2450]: I0304 02:18:25.521130 2450 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 02:18:25.529699 kubelet[2450]: I0304 02:18:25.529016 2450 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 02:18:25.533998 kubelet[2450]: I0304 02:18:25.533732 2450 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 02:18:25.535815 kubelet[2450]: I0304 02:18:25.533773 2450 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-mtsxv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Mar 4 02:18:25.535815 kubelet[2450]: I0304 02:18:25.535573 2450 topology_manager.go:138] "Creating topology manager with none policy"
Mar 4 02:18:25.535815 kubelet[2450]: I0304 02:18:25.535590 2450 container_manager_linux.go:303] "Creating device plugin manager"
Mar 4 02:18:25.536321 kubelet[2450]: I0304 02:18:25.536289 2450 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 02:18:25.543765 kubelet[2450]: I0304 02:18:25.543625 2450 kubelet.go:480] "Attempting to sync node with API server"
Mar 4 02:18:25.543765 kubelet[2450]: I0304 02:18:25.543662 2450 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 4 02:18:25.544504 kubelet[2450]: I0304 02:18:25.544391 2450 kubelet.go:386] "Adding apiserver pod source"
Mar 4 02:18:25.546466 kubelet[2450]: I0304 02:18:25.546028 2450 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 4 02:18:25.547778 kubelet[2450]: E0304 02:18:25.547733 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.66.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-mtsxv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 02:18:25.550527 kubelet[2450]: E0304 02:18:25.550033 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.66.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 02:18:25.550821 kubelet[2450]: I0304 02:18:25.550781 2450 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Mar 4 02:18:25.551756 kubelet[2450]: I0304 02:18:25.551732 2450 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 4 02:18:25.552927 kubelet[2450]: W0304 02:18:25.552907 2450 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 4 02:18:25.562005 kubelet[2450]: I0304 02:18:25.561986 2450 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 4 02:18:25.562823 kubelet[2450]: I0304 02:18:25.562168 2450 server.go:1289] "Started kubelet"
Mar 4 02:18:25.567226 kubelet[2450]: I0304 02:18:25.566933 2450 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 4 02:18:25.567399 kubelet[2450]: I0304 02:18:25.567298 2450 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 4 02:18:25.568258 kubelet[2450]: I0304 02:18:25.568075 2450 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 4 02:18:25.568439 kubelet[2450]: I0304 02:18:25.568249 2450 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 4 02:18:25.570574 kubelet[2450]: I0304 02:18:25.570435 2450 server.go:317] "Adding debug handlers to kubelet server"
Mar 4 02:18:25.577178 kubelet[2450]: E0304 02:18:25.575844 2450 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.70:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.70:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-mtsxv.gb1.brightbox.com.189981dae93071aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-mtsxv.gb1.brightbox.com,UID:srv-mtsxv.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-mtsxv.gb1.brightbox.com,},FirstTimestamp:2026-03-04 02:18:25.562120618 +0000 UTC m=+0.870236974,LastTimestamp:2026-03-04 02:18:25.562120618 +0000 UTC m=+0.870236974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-mtsxv.gb1.brightbox.com,}"
Mar 4 02:18:25.578838 kubelet[2450]: I0304 02:18:25.578654 2450 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 4 02:18:25.581443 kubelet[2450]: I0304 02:18:25.581398 2450 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 4 02:18:25.581754 kubelet[2450]: E0304 02:18:25.581726 2450 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-mtsxv.gb1.brightbox.com\" not found"
Mar 4 02:18:25.582237 kubelet[2450]: I0304 02:18:25.582211 2450 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 4 02:18:25.582332 kubelet[2450]: I0304 02:18:25.582314 2450 reconciler.go:26] "Reconciler: start to sync state"
Mar 4 02:18:25.584357 kubelet[2450]: E0304 02:18:25.584313 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.66.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 02:18:25.584486 kubelet[2450]: E0304 02:18:25.584454 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mtsxv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.70:6443: connect: connection refused" interval="200ms"
Mar 4 02:18:25.586216 kubelet[2450]: I0304 02:18:25.586182 2450 factory.go:223] Registration of the systemd container factory successfully
Mar 4 02:18:25.586355 kubelet[2450]: E0304 02:18:25.586335 2450 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 4 02:18:25.586564 kubelet[2450]: I0304 02:18:25.586336 2450 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 4 02:18:25.588605 kubelet[2450]: I0304 02:18:25.588559 2450 factory.go:223] Registration of the containerd container factory successfully
Mar 4 02:18:25.633733 kubelet[2450]: I0304 02:18:25.633677 2450 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Mar 4 02:18:25.638700 kubelet[2450]: I0304 02:18:25.638660 2450 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Mar 4 02:18:25.638780 kubelet[2450]: I0304 02:18:25.638712 2450 status_manager.go:230] "Starting to sync pod status with apiserver"
Mar 4 02:18:25.638780 kubelet[2450]: I0304 02:18:25.638749 2450 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 4 02:18:25.638780 kubelet[2450]: I0304 02:18:25.638773 2450 kubelet.go:2436] "Starting kubelet main sync loop"
Mar 4 02:18:25.639294 kubelet[2450]: E0304 02:18:25.639233 2450 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 4 02:18:25.642366 kubelet[2450]: E0304 02:18:25.641722 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.66.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 02:18:25.647328 kubelet[2450]: I0304 02:18:25.647306 2450 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 4 02:18:25.647871 kubelet[2450]: I0304 02:18:25.647848 2450 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 4 02:18:25.647997 kubelet[2450]: I0304 02:18:25.647979 2450 state_mem.go:36] "Initialized new in-memory state store"
Mar 4 02:18:25.652688 kubelet[2450]: I0304 02:18:25.652667 2450 policy_none.go:49] "None policy: Start"
Mar 4 02:18:25.652892 kubelet[2450]: I0304 02:18:25.652847 2450 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 4 02:18:25.653077 kubelet[2450]: I0304 02:18:25.653060 2450 state_mem.go:35] "Initializing new in-memory state store"
Mar 4 02:18:25.665208 kubelet[2450]: E0304 02:18:25.665179 2450 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Mar 4 02:18:25.665555 kubelet[2450]: I0304 02:18:25.665520 2450 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 4 02:18:25.665714 kubelet[2450]: I0304 02:18:25.665572 2450 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 4 02:18:25.668169 kubelet[2450]: I0304 02:18:25.668032 2450 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 4 02:18:25.669016 kubelet[2450]: E0304 02:18:25.668866 2450 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 4 02:18:25.669016 kubelet[2450]: E0304 02:18:25.668990 2450 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"srv-mtsxv.gb1.brightbox.com\" not found"
Mar 4 02:18:25.724404 kubelet[2450]: E0304 02:18:25.724113 2450 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.230.66.70:6443/api/v1/namespaces/default/events\": dial tcp 10.230.66.70:6443: connect: connection refused" event="&Event{ObjectMeta:{srv-mtsxv.gb1.brightbox.com.189981dae93071aa default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:srv-mtsxv.gb1.brightbox.com,UID:srv-mtsxv.gb1.brightbox.com,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:srv-mtsxv.gb1.brightbox.com,},FirstTimestamp:2026-03-04 02:18:25.562120618 +0000 UTC m=+0.870236974,LastTimestamp:2026-03-04 02:18:25.562120618 +0000 UTC m=+0.870236974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:srv-mtsxv.gb1.brightbox.com,}"
Mar 4 02:18:25.751842 kubelet[2450]: E0304 02:18:25.750480 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.758265 kubelet[2450]: E0304 02:18:25.758177 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.760760 kubelet[2450]: E0304 02:18:25.760715 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.768923 kubelet[2450]: I0304 02:18:25.768766 2450 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.769367 kubelet[2450]: E0304 02:18:25.769320 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.70:6443/api/v1/nodes\": dial tcp 10.230.66.70:6443: connect: connection refused" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.783836 kubelet[2450]: I0304 02:18:25.783784 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c130856ffeb69b33907a335d3172db-kubeconfig\") pod \"kube-scheduler-srv-mtsxv.gb1.brightbox.com\" (UID: \"c4c130856ffeb69b33907a335d3172db\") " pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.784473 kubelet[2450]: I0304 02:18:25.783842 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-k8s-certs\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.784473 kubelet[2450]: I0304 02:18:25.783871 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-usr-share-ca-certificates\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.784473 kubelet[2450]: I0304 02:18:25.783897 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-flexvolume-dir\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.784473 kubelet[2450]: I0304 02:18:25.783950 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.784473 kubelet[2450]: I0304 02:18:25.783975 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-ca-certs\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.785225 kubelet[2450]: I0304 02:18:25.783997 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-ca-certs\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.785225 kubelet[2450]: I0304 02:18:25.784019 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-k8s-certs\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.785225 kubelet[2450]: I0304 02:18:25.784054 2450 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-kubeconfig\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.785225 kubelet[2450]: E0304 02:18:25.785058 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mtsxv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.70:6443: connect: connection refused" interval="400ms"
Mar 4 02:18:25.973918 kubelet[2450]: I0304 02:18:25.973719 2450 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:25.974388 kubelet[2450]: E0304 02:18:25.974319 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.70:6443/api/v1/nodes\": dial tcp 10.230.66.70:6443: connect: connection refused" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:26.053459 containerd[1633]: time="2026-03-04T02:18:26.053284455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-mtsxv.gb1.brightbox.com,Uid:c4c130856ffeb69b33907a335d3172db,Namespace:kube-system,Attempt:0,}"
Mar 4 02:18:26.063650 containerd[1633]: time="2026-03-04T02:18:26.063221418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-mtsxv.gb1.brightbox.com,Uid:47286106e728f96fac26c3ada988d036,Namespace:kube-system,Attempt:0,}"
Mar 4 02:18:26.063650 containerd[1633]: time="2026-03-04T02:18:26.063220792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-mtsxv.gb1.brightbox.com,Uid:df9bc69dbf39e5c2a484a3b228637171,Namespace:kube-system,Attempt:0,}"
Mar 4 02:18:26.186439 kubelet[2450]: E0304 02:18:26.186348 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mtsxv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.70:6443: connect: connection refused" interval="800ms"
Mar 4 02:18:26.378074 kubelet[2450]: I0304 02:18:26.377931 2450 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:26.378526 kubelet[2450]: E0304 02:18:26.378466 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.70:6443/api/v1/nodes\": dial tcp 10.230.66.70:6443: connect: connection refused" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:26.417399 kubelet[2450]: E0304 02:18:26.417347 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.230.66.70:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 4 02:18:26.603588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount694173108.mount: Deactivated successfully.
Mar 4 02:18:26.611257 containerd[1633]: time="2026-03-04T02:18:26.609992872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 02:18:26.612343 containerd[1633]: time="2026-03-04T02:18:26.612252795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 02:18:26.614818 containerd[1633]: time="2026-03-04T02:18:26.613080423Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 02:18:26.614818 containerd[1633]: time="2026-03-04T02:18:26.614178690Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 02:18:26.615030 containerd[1633]: time="2026-03-04T02:18:26.614987534Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Mar 4 02:18:26.615944 containerd[1633]: time="2026-03-04T02:18:26.615891001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Mar 4 02:18:26.616047 containerd[1633]: time="2026-03-04T02:18:26.615995364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 02:18:26.622226 containerd[1633]: time="2026-03-04T02:18:26.622179508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 4 02:18:26.624789 containerd[1633]: time="2026-03-04T02:18:26.624742556Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 561.397586ms"
Mar 4 02:18:26.627990 containerd[1633]: time="2026-03-04T02:18:26.627937467Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 573.61751ms"
Mar 4 02:18:26.628325 containerd[1633]: time="2026-03-04T02:18:26.628217206Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 564.765526ms"
Mar 4 02:18:26.652475 kubelet[2450]: E0304 02:18:26.652433 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.230.66.70:6443/api/v1/nodes?fieldSelector=metadata.name%3Dsrv-mtsxv.gb1.brightbox.com&limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 4 02:18:26.826130 containerd[1633]: time="2026-03-04T02:18:26.825917658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:18:26.826393 containerd[1633]: time="2026-03-04T02:18:26.826142683Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:18:26.826393 containerd[1633]: time="2026-03-04T02:18:26.826184102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.826579 containerd[1633]: time="2026-03-04T02:18:26.826521168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.833072 containerd[1633]: time="2026-03-04T02:18:26.832751859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:18:26.833072 containerd[1633]: time="2026-03-04T02:18:26.833037386Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:18:26.833268 containerd[1633]: time="2026-03-04T02:18:26.833055957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.834206 containerd[1633]: time="2026-03-04T02:18:26.833554817Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.838255 containerd[1633]: time="2026-03-04T02:18:26.838154892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:18:26.838427 containerd[1633]: time="2026-03-04T02:18:26.838379540Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:18:26.838498 containerd[1633]: time="2026-03-04T02:18:26.838448279Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.838698 containerd[1633]: time="2026-03-04T02:18:26.838651679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:26.987419 kubelet[2450]: E0304 02:18:26.985851 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.230.66.70:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 4 02:18:26.988967 kubelet[2450]: E0304 02:18:26.988779 2450 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.230.66.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/srv-mtsxv.gb1.brightbox.com?timeout=10s\": dial tcp 10.230.66.70:6443: connect: connection refused" interval="1.6s"
Mar 4 02:18:26.989573 containerd[1633]: time="2026-03-04T02:18:26.989521061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-srv-mtsxv.gb1.brightbox.com,Uid:47286106e728f96fac26c3ada988d036,Namespace:kube-system,Attempt:0,} returns sandbox id \"d85e4394c8120eca53d0dd09c43125596a933a6ee6b67b311bba4a2817c3a1c4\""
Mar 4 02:18:27.017689 containerd[1633]: time="2026-03-04T02:18:27.016224157Z" level=info msg="CreateContainer within sandbox \"d85e4394c8120eca53d0dd09c43125596a933a6ee6b67b311bba4a2817c3a1c4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 4 02:18:27.026040 containerd[1633]: time="2026-03-04T02:18:27.026003496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-srv-mtsxv.gb1.brightbox.com,Uid:c4c130856ffeb69b33907a335d3172db,Namespace:kube-system,Attempt:0,} returns sandbox id \"d25eca8c3471ad6a5b882fd5843417e1e9bba1ab2b12a7315da8828773eac8cf\""
Mar 4 02:18:27.027429 containerd[1633]: time="2026-03-04T02:18:27.027385570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-srv-mtsxv.gb1.brightbox.com,Uid:df9bc69dbf39e5c2a484a3b228637171,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec2a0f95bc0aaf8f4425aaaf629e39dd5ad1b0ff9288b9e935fe22601770ca06\""
Mar 4 02:18:27.042149 kubelet[2450]: E0304 02:18:27.042103 2450 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.230.66.70:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Mar 4 02:18:27.046650 containerd[1633]: time="2026-03-04T02:18:27.046584272Z" level=info msg="CreateContainer within sandbox \"d25eca8c3471ad6a5b882fd5843417e1e9bba1ab2b12a7315da8828773eac8cf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 4 02:18:27.053000 containerd[1633]: time="2026-03-04T02:18:27.052818268Z" level=info msg="CreateContainer within sandbox \"ec2a0f95bc0aaf8f4425aaaf629e39dd5ad1b0ff9288b9e935fe22601770ca06\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 4 02:18:27.070032 containerd[1633]: time="2026-03-04T02:18:27.069969402Z" level=info msg="CreateContainer within sandbox \"d25eca8c3471ad6a5b882fd5843417e1e9bba1ab2b12a7315da8828773eac8cf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"05886c0eeaf5c9d8fa07edc320e26f0ed65dd5e3838a2ec81103e1d7b2194849\""
Mar 4 02:18:27.071319 containerd[1633]: time="2026-03-04T02:18:27.070692916Z" level=info msg="CreateContainer within sandbox \"d85e4394c8120eca53d0dd09c43125596a933a6ee6b67b311bba4a2817c3a1c4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"433812c9ac58b24ab1c6ef680d4321faacf1a05f46601b4b99c72cd4aab9dec2\""
Mar 4 02:18:27.071430 containerd[1633]: time="2026-03-04T02:18:27.071376843Z" level=info msg="StartContainer for \"433812c9ac58b24ab1c6ef680d4321faacf1a05f46601b4b99c72cd4aab9dec2\""
Mar 4 02:18:27.072110 containerd[1633]: time="2026-03-04T02:18:27.072079533Z" level=info msg="StartContainer for \"05886c0eeaf5c9d8fa07edc320e26f0ed65dd5e3838a2ec81103e1d7b2194849\""
Mar 4 02:18:27.086927 containerd[1633]: time="2026-03-04T02:18:27.085743479Z" level=info msg="CreateContainer within sandbox \"ec2a0f95bc0aaf8f4425aaaf629e39dd5ad1b0ff9288b9e935fe22601770ca06\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2a56c212353e657b7c5dc63a18ad1079980cf83b5127c06067871a291e4911d8\""
Mar 4 02:18:27.087411 containerd[1633]: time="2026-03-04T02:18:27.087378541Z" level=info msg="StartContainer for \"2a56c212353e657b7c5dc63a18ad1079980cf83b5127c06067871a291e4911d8\""
Mar 4 02:18:27.190531 kubelet[2450]: I0304 02:18:27.190458 2450 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:27.193359 kubelet[2450]: E0304 02:18:27.193319 2450 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.230.66.70:6443/api/v1/nodes\": dial tcp 10.230.66.70:6443: connect: connection refused" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:27.241401 containerd[1633]: time="2026-03-04T02:18:27.239886503Z" level=info msg="StartContainer for \"433812c9ac58b24ab1c6ef680d4321faacf1a05f46601b4b99c72cd4aab9dec2\" returns successfully"
Mar 4 02:18:27.255509 containerd[1633]: time="2026-03-04T02:18:27.255265628Z" level=info msg="StartContainer for \"2a56c212353e657b7c5dc63a18ad1079980cf83b5127c06067871a291e4911d8\" returns successfully"
Mar 4 02:18:27.283040 containerd[1633]: time="2026-03-04T02:18:27.282670618Z" level=info msg="StartContainer for \"05886c0eeaf5c9d8fa07edc320e26f0ed65dd5e3838a2ec81103e1d7b2194849\" returns successfully"
Mar 4 02:18:27.529033 kubelet[2450]: E0304 02:18:27.527772 2450 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.230.66.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.230.66.70:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 4 02:18:27.668080 kubelet[2450]: E0304 02:18:27.665391 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:27.668399 kubelet[2450]: E0304 02:18:27.668367 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:27.672226 kubelet[2450]: E0304 02:18:27.672186 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:28.679815 kubelet[2450]: E0304 02:18:28.679749 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:28.680552 kubelet[2450]: E0304 02:18:28.680436 2450 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:28.799105 kubelet[2450]: I0304 02:18:28.799052 2450 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.863051 kubelet[2450]: E0304 02:18:30.862914 2450 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"srv-mtsxv.gb1.brightbox.com\" not found" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.945207 kubelet[2450]: I0304 02:18:30.945126 2450 kubelet_node_status.go:78] "Successfully registered node" node="srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.945207 kubelet[2450]: E0304 02:18:30.945220 2450 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"srv-mtsxv.gb1.brightbox.com\": node \"srv-mtsxv.gb1.brightbox.com\" not found"
Mar 4 02:18:30.984933 kubelet[2450]: I0304 02:18:30.982517 2450 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.993969 kubelet[2450]: E0304 02:18:30.993060 2450 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-mtsxv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.993969 kubelet[2450]: I0304 02:18:30.993102 2450 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.998163 kubelet[2450]: E0304 02:18:30.997974 2450 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:30.998163 kubelet[2450]: I0304 02:18:30.998005 2450 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:31.002432 kubelet[2450]: E0304 02:18:31.002393 2450 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:31.554411 kubelet[2450]: I0304 02:18:31.554085 2450 apiserver.go:52] "Watching apiserver"
Mar 4 02:18:31.582503 kubelet[2450]: I0304 02:18:31.582443 2450 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Mar 4 02:18:32.035597 update_engine[1617]: I20260304 02:18:32.035346 1617 update_attempter.cc:509] Updating boot flags...
Mar 4 02:18:32.041424 kubelet[2450]: I0304 02:18:32.041033 2450 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com"
Mar 4 02:18:32.078716 kubelet[2450]: I0304 02:18:32.076573 2450 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]"
Mar 4 02:18:32.173112 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2741)
Mar 4 02:18:32.314957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2745)
Mar 4 02:18:33.211545 systemd[1]: Reloading requested from client PID 2749 ('systemctl') (unit session-9.scope)...
Mar 4 02:18:33.211610 systemd[1]: Reloading...
Mar 4 02:18:33.346992 zram_generator::config[2791]: No configuration found.
Mar 4 02:18:33.525321 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 4 02:18:33.645968 systemd[1]: Reloading finished in 433 ms.
Mar 4 02:18:33.698039 kubelet[2450]: I0304 02:18:33.697890 2450 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 02:18:33.698436 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:33.712972 systemd[1]: kubelet.service: Deactivated successfully.
Mar 4 02:18:33.713787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:33.728218 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 4 02:18:33.980087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 4 02:18:33.990526 (kubelet)[2861]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 4 02:18:34.143406 kubelet[2861]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 02:18:34.148544 kubelet[2861]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 4 02:18:34.148544 kubelet[2861]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 4 02:18:34.148544 kubelet[2861]: I0304 02:18:34.146968 2861 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 4 02:18:34.164341 kubelet[2861]: I0304 02:18:34.164255 2861 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 4 02:18:34.164341 kubelet[2861]: I0304 02:18:34.164296 2861 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 4 02:18:34.164699 kubelet[2861]: I0304 02:18:34.164659 2861 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 4 02:18:34.166766 kubelet[2861]: I0304 02:18:34.166718 2861 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Mar 4 02:18:34.173348 kubelet[2861]: I0304 02:18:34.173303 2861 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 4 02:18:34.188017 kubelet[2861]: E0304 02:18:34.187491 2861 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Mar 4 02:18:34.188017 kubelet[2861]: I0304 02:18:34.187533 2861 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Mar 4 02:18:34.197037 kubelet[2861]: I0304 02:18:34.196994 2861 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 4 02:18:34.197778 kubelet[2861]: I0304 02:18:34.197723 2861 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 4 02:18:34.198055 kubelet[2861]: I0304 02:18:34.197765 2861 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"srv-mtsxv.gb1.brightbox.com","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quant
ity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Mar 4 02:18:34.198390 kubelet[2861]: I0304 02:18:34.198106 2861 topology_manager.go:138] "Creating topology manager with none policy" Mar 4 02:18:34.198390 kubelet[2861]: I0304 02:18:34.198125 2861 container_manager_linux.go:303] "Creating device plugin manager" Mar 4 02:18:34.198390 kubelet[2861]: I0304 02:18:34.198240 2861 state_mem.go:36] "Initialized new in-memory state store" Mar 4 02:18:34.198585 kubelet[2861]: I0304 02:18:34.198540 2861 kubelet.go:480] "Attempting to sync node with API server" Mar 4 02:18:34.198829 kubelet[2861]: I0304 02:18:34.198784 2861 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 4 02:18:34.198909 kubelet[2861]: I0304 02:18:34.198861 2861 kubelet.go:386] "Adding apiserver pod source" Mar 4 02:18:34.198909 kubelet[2861]: I0304 02:18:34.198890 2861 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 4 02:18:34.210618 kubelet[2861]: I0304 02:18:34.210487 2861 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Mar 4 02:18:34.211946 kubelet[2861]: I0304 02:18:34.211923 2861 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 4 02:18:34.223217 kubelet[2861]: I0304 02:18:34.223192 2861 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 4 02:18:34.223813 kubelet[2861]: I0304 02:18:34.223378 2861 server.go:1289] "Started kubelet" Mar 4 02:18:34.228845 kubelet[2861]: I0304 
02:18:34.228815 2861 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 4 02:18:34.243439 kubelet[2861]: I0304 02:18:34.243296 2861 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 4 02:18:34.246214 kubelet[2861]: I0304 02:18:34.245448 2861 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 4 02:18:34.246214 kubelet[2861]: E0304 02:18:34.245702 2861 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"srv-mtsxv.gb1.brightbox.com\" not found" Mar 4 02:18:34.246214 kubelet[2861]: I0304 02:18:34.246147 2861 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 4 02:18:34.247640 kubelet[2861]: I0304 02:18:34.247620 2861 reconciler.go:26] "Reconciler: start to sync state" Mar 4 02:18:34.251365 kubelet[2861]: I0304 02:18:34.248627 2861 server.go:317] "Adding debug handlers to kubelet server" Mar 4 02:18:34.252521 kubelet[2861]: I0304 02:18:34.248729 2861 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 4 02:18:34.253625 kubelet[2861]: I0304 02:18:34.253592 2861 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 4 02:18:34.260863 kubelet[2861]: I0304 02:18:34.259766 2861 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 4 02:18:34.282088 kubelet[2861]: I0304 02:18:34.281926 2861 factory.go:223] Registration of the systemd container factory successfully Mar 4 02:18:34.282989 kubelet[2861]: I0304 02:18:34.282381 2861 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 4 02:18:34.287190 kubelet[2861]: E0304 02:18:34.287014 2861 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 4 02:18:34.292771 kubelet[2861]: I0304 02:18:34.292699 2861 factory.go:223] Registration of the containerd container factory successfully Mar 4 02:18:34.298149 kubelet[2861]: I0304 02:18:34.297858 2861 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 4 02:18:34.305819 kubelet[2861]: I0304 02:18:34.305467 2861 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 4 02:18:34.305819 kubelet[2861]: I0304 02:18:34.305519 2861 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 4 02:18:34.305819 kubelet[2861]: I0304 02:18:34.305556 2861 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 4 02:18:34.305819 kubelet[2861]: I0304 02:18:34.305569 2861 kubelet.go:2436] "Starting kubelet main sync loop" Mar 4 02:18:34.305819 kubelet[2861]: E0304 02:18:34.305636 2861 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 4 02:18:34.406299 kubelet[2861]: E0304 02:18:34.406048 2861 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 4 02:18:34.426103 kubelet[2861]: I0304 02:18:34.425697 2861 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 4 02:18:34.426103 kubelet[2861]: I0304 02:18:34.425726 2861 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 4 02:18:34.426103 kubelet[2861]: I0304 02:18:34.425764 2861 state_mem.go:36] "Initialized new in-memory state store" Mar 4 02:18:34.426103 kubelet[2861]: I0304 02:18:34.426007 2861 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 4 02:18:34.427045 kubelet[2861]: I0304 02:18:34.426637 2861 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 4 02:18:34.427045 kubelet[2861]: 
I0304 02:18:34.426714 2861 policy_none.go:49] "None policy: Start" Mar 4 02:18:34.427045 kubelet[2861]: I0304 02:18:34.426745 2861 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 4 02:18:34.427045 kubelet[2861]: I0304 02:18:34.426773 2861 state_mem.go:35] "Initializing new in-memory state store" Mar 4 02:18:34.427045 kubelet[2861]: I0304 02:18:34.426947 2861 state_mem.go:75] "Updated machine memory state" Mar 4 02:18:34.431153 kubelet[2861]: E0304 02:18:34.430861 2861 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 4 02:18:34.431822 kubelet[2861]: I0304 02:18:34.431399 2861 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 4 02:18:34.431822 kubelet[2861]: I0304 02:18:34.431430 2861 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 4 02:18:34.434117 kubelet[2861]: I0304 02:18:34.433622 2861 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 4 02:18:34.438934 kubelet[2861]: E0304 02:18:34.438887 2861 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Mar 4 02:18:34.560366 kubelet[2861]: I0304 02:18:34.560259 2861 kubelet_node_status.go:75] "Attempting to register node" node="srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.579891 kubelet[2861]: I0304 02:18:34.579126 2861 kubelet_node_status.go:124] "Node was previously registered" node="srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.579891 kubelet[2861]: I0304 02:18:34.579267 2861 kubelet_node_status.go:78] "Successfully registered node" node="srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.609470 kubelet[2861]: I0304 02:18:34.608842 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.611816 kubelet[2861]: I0304 02:18:34.610921 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.612135 kubelet[2861]: I0304 02:18:34.611851 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.619622 kubelet[2861]: I0304 02:18:34.619208 2861 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:18:34.620038 kubelet[2861]: I0304 02:18:34.619898 2861 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:18:34.620038 kubelet[2861]: E0304 02:18:34.619960 2861 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" already exists" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.622686 kubelet[2861]: I0304 02:18:34.622509 2861 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS 
label is recommended: [must not contain dots]" Mar 4 02:18:34.654328 kubelet[2861]: I0304 02:18:34.653952 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-flexvolume-dir\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654328 kubelet[2861]: I0304 02:18:34.654017 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-usr-share-ca-certificates\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654328 kubelet[2861]: I0304 02:18:34.654058 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4c130856ffeb69b33907a335d3172db-kubeconfig\") pod \"kube-scheduler-srv-mtsxv.gb1.brightbox.com\" (UID: \"c4c130856ffeb69b33907a335d3172db\") " pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654328 kubelet[2861]: I0304 02:18:34.654085 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-usr-share-ca-certificates\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654328 kubelet[2861]: I0304 02:18:34.654122 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-ca-certs\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654746 kubelet[2861]: I0304 02:18:34.654147 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-k8s-certs\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654746 kubelet[2861]: I0304 02:18:34.654172 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/df9bc69dbf39e5c2a484a3b228637171-kubeconfig\") pod \"kube-controller-manager-srv-mtsxv.gb1.brightbox.com\" (UID: \"df9bc69dbf39e5c2a484a3b228637171\") " pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654746 kubelet[2861]: I0304 02:18:34.654196 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-ca-certs\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:34.654746 kubelet[2861]: I0304 02:18:34.654224 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/47286106e728f96fac26c3ada988d036-k8s-certs\") pod \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" (UID: \"47286106e728f96fac26c3ada988d036\") " pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:35.225547 kubelet[2861]: 
I0304 02:18:35.224901 2861 apiserver.go:52] "Watching apiserver" Mar 4 02:18:35.247081 kubelet[2861]: I0304 02:18:35.247031 2861 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 4 02:18:35.335826 kubelet[2861]: I0304 02:18:35.335735 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:35.340116 kubelet[2861]: I0304 02:18:35.337647 2861 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:35.348998 kubelet[2861]: I0304 02:18:35.348945 2861 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:18:35.349157 kubelet[2861]: E0304 02:18:35.349057 2861 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-srv-mtsxv.gb1.brightbox.com\" already exists" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:35.349507 kubelet[2861]: I0304 02:18:35.349387 2861 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]" Mar 4 02:18:35.349507 kubelet[2861]: E0304 02:18:35.349440 2861 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-srv-mtsxv.gb1.brightbox.com\" already exists" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com" Mar 4 02:18:35.405249 kubelet[2861]: I0304 02:18:35.404746 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-srv-mtsxv.gb1.brightbox.com" podStartSLOduration=1.404645415 podStartE2EDuration="1.404645415s" podCreationTimestamp="2026-03-04 02:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 
02:18:35.388377922 +0000 UTC m=+1.352732457" watchObservedRunningTime="2026-03-04 02:18:35.404645415 +0000 UTC m=+1.368999965" Mar 4 02:18:35.421352 kubelet[2861]: I0304 02:18:35.421269 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-srv-mtsxv.gb1.brightbox.com" podStartSLOduration=1.421249202 podStartE2EDuration="1.421249202s" podCreationTimestamp="2026-03-04 02:18:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:18:35.418127057 +0000 UTC m=+1.382481590" watchObservedRunningTime="2026-03-04 02:18:35.421249202 +0000 UTC m=+1.385603745" Mar 4 02:18:35.421644 kubelet[2861]: I0304 02:18:35.421385 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-srv-mtsxv.gb1.brightbox.com" podStartSLOduration=3.421377936 podStartE2EDuration="3.421377936s" podCreationTimestamp="2026-03-04 02:18:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:18:35.405091229 +0000 UTC m=+1.369445771" watchObservedRunningTime="2026-03-04 02:18:35.421377936 +0000 UTC m=+1.385732477" Mar 4 02:18:38.589984 kubelet[2861]: I0304 02:18:38.589906 2861 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 4 02:18:38.591788 containerd[1633]: time="2026-03-04T02:18:38.591093914Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 4 02:18:38.592407 kubelet[2861]: I0304 02:18:38.591421 2861 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 4 02:18:39.685521 kubelet[2861]: I0304 02:18:39.685250 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f03944c6-e8b8-4e5e-84a2-2c50d29b7d15-kube-proxy\") pod \"kube-proxy-lqmf6\" (UID: \"f03944c6-e8b8-4e5e-84a2-2c50d29b7d15\") " pod="kube-system/kube-proxy-lqmf6"
Mar 4 02:18:39.685521 kubelet[2861]: I0304 02:18:39.685323 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f03944c6-e8b8-4e5e-84a2-2c50d29b7d15-xtables-lock\") pod \"kube-proxy-lqmf6\" (UID: \"f03944c6-e8b8-4e5e-84a2-2c50d29b7d15\") " pod="kube-system/kube-proxy-lqmf6"
Mar 4 02:18:39.685521 kubelet[2861]: I0304 02:18:39.685356 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f03944c6-e8b8-4e5e-84a2-2c50d29b7d15-lib-modules\") pod \"kube-proxy-lqmf6\" (UID: \"f03944c6-e8b8-4e5e-84a2-2c50d29b7d15\") " pod="kube-system/kube-proxy-lqmf6"
Mar 4 02:18:39.685521 kubelet[2861]: I0304 02:18:39.685389 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87lg\" (UniqueName: \"kubernetes.io/projected/f03944c6-e8b8-4e5e-84a2-2c50d29b7d15-kube-api-access-p87lg\") pod \"kube-proxy-lqmf6\" (UID: \"f03944c6-e8b8-4e5e-84a2-2c50d29b7d15\") " pod="kube-system/kube-proxy-lqmf6"
Mar 4 02:18:39.786481 kubelet[2861]: I0304 02:18:39.785705 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3323c26f-a5a6-42d0-aa81-521065936e5b-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-gmr6n\" (UID: \"3323c26f-a5a6-42d0-aa81-521065936e5b\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gmr6n"
Mar 4 02:18:39.786481 kubelet[2861]: I0304 02:18:39.785885 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s8lh\" (UniqueName: \"kubernetes.io/projected/3323c26f-a5a6-42d0-aa81-521065936e5b-kube-api-access-2s8lh\") pod \"tigera-operator-6bf85f8dd-gmr6n\" (UID: \"3323c26f-a5a6-42d0-aa81-521065936e5b\") " pod="tigera-operator/tigera-operator-6bf85f8dd-gmr6n"
Mar 4 02:18:39.963041 containerd[1633]: time="2026-03-04T02:18:39.962874036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lqmf6,Uid:f03944c6-e8b8-4e5e-84a2-2c50d29b7d15,Namespace:kube-system,Attempt:0,}"
Mar 4 02:18:40.015903 containerd[1633]: time="2026-03-04T02:18:40.015453092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:18:40.015903 containerd[1633]: time="2026-03-04T02:18:40.015575067Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:18:40.015903 containerd[1633]: time="2026-03-04T02:18:40.015592011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:40.015903 containerd[1633]: time="2026-03-04T02:18:40.015836389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:40.084151 containerd[1633]: time="2026-03-04T02:18:40.084092299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gmr6n,Uid:3323c26f-a5a6-42d0-aa81-521065936e5b,Namespace:tigera-operator,Attempt:0,}"
Mar 4 02:18:40.089539 containerd[1633]: time="2026-03-04T02:18:40.088942387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-lqmf6,Uid:f03944c6-e8b8-4e5e-84a2-2c50d29b7d15,Namespace:kube-system,Attempt:0,} returns sandbox id \"b574838270dbf255be9deae63bfa0e71f84c39489d586d0acabb235626f962f2\""
Mar 4 02:18:40.097055 containerd[1633]: time="2026-03-04T02:18:40.097008260Z" level=info msg="CreateContainer within sandbox \"b574838270dbf255be9deae63bfa0e71f84c39489d586d0acabb235626f962f2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 4 02:18:40.119210 containerd[1633]: time="2026-03-04T02:18:40.119127510Z" level=info msg="CreateContainer within sandbox \"b574838270dbf255be9deae63bfa0e71f84c39489d586d0acabb235626f962f2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7fdd5551ad9834cb34ce8db8d1523aacf48f4f75c60e65988c5575dc1ecad7af\""
Mar 4 02:18:40.121008 containerd[1633]: time="2026-03-04T02:18:40.120618852Z" level=info msg="StartContainer for \"7fdd5551ad9834cb34ce8db8d1523aacf48f4f75c60e65988c5575dc1ecad7af\""
Mar 4 02:18:40.131558 containerd[1633]: time="2026-03-04T02:18:40.131130245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 4 02:18:40.131558 containerd[1633]: time="2026-03-04T02:18:40.131233550Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 4 02:18:40.131558 containerd[1633]: time="2026-03-04T02:18:40.131257135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:40.131558 containerd[1633]: time="2026-03-04T02:18:40.131436199Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 4 02:18:40.244151 containerd[1633]: time="2026-03-04T02:18:40.244013829Z" level=info msg="StartContainer for \"7fdd5551ad9834cb34ce8db8d1523aacf48f4f75c60e65988c5575dc1ecad7af\" returns successfully"
Mar 4 02:18:40.259710 containerd[1633]: time="2026-03-04T02:18:40.259278331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-gmr6n,Uid:3323c26f-a5a6-42d0-aa81-521065936e5b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"76e8ca09cdba1aa75681100b901dacc289fda5764f81c2dbb7a829de71febd0c\""
Mar 4 02:18:40.268229 containerd[1633]: time="2026-03-04T02:18:40.268193328Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\""
Mar 4 02:18:40.396764 kubelet[2861]: I0304 02:18:40.396105 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lqmf6" podStartSLOduration=1.396083674 podStartE2EDuration="1.396083674s" podCreationTimestamp="2026-03-04 02:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:18:40.378876936 +0000 UTC m=+6.343231472" watchObservedRunningTime="2026-03-04 02:18:40.396083674 +0000 UTC m=+6.360438221"
Mar 4 02:18:42.619458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580652875.mount: Deactivated successfully.
Mar 4 02:18:44.162158 containerd[1633]: time="2026-03-04T02:18:44.161449214Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:44.163416 containerd[1633]: time="2026-03-04T02:18:44.162925102Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Mar 4 02:18:44.164082 containerd[1633]: time="2026-03-04T02:18:44.164009322Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:44.166914 containerd[1633]: time="2026-03-04T02:18:44.166858485Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 4 02:18:44.168556 containerd[1633]: time="2026-03-04T02:18:44.168271881Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.900002712s"
Mar 4 02:18:44.168556 containerd[1633]: time="2026-03-04T02:18:44.168358557Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Mar 4 02:18:44.177918 containerd[1633]: time="2026-03-04T02:18:44.177869208Z" level=info msg="CreateContainer within sandbox \"76e8ca09cdba1aa75681100b901dacc289fda5764f81c2dbb7a829de71febd0c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 4 02:18:44.208827 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1583856300.mount: Deactivated successfully.
Mar 4 02:18:44.214828 containerd[1633]: time="2026-03-04T02:18:44.214036606Z" level=info msg="CreateContainer within sandbox \"76e8ca09cdba1aa75681100b901dacc289fda5764f81c2dbb7a829de71febd0c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"c035c1b7be64e992cba98d7d0e5d98d0c27f5488b8ecbf2b110c6e8345215f0d\""
Mar 4 02:18:44.215601 containerd[1633]: time="2026-03-04T02:18:44.215572940Z" level=info msg="StartContainer for \"c035c1b7be64e992cba98d7d0e5d98d0c27f5488b8ecbf2b110c6e8345215f0d\""
Mar 4 02:18:44.300065 containerd[1633]: time="2026-03-04T02:18:44.300013743Z" level=info msg="StartContainer for \"c035c1b7be64e992cba98d7d0e5d98d0c27f5488b8ecbf2b110c6e8345215f0d\" returns successfully"
Mar 4 02:18:44.641217 kubelet[2861]: I0304 02:18:44.641091 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-gmr6n" podStartSLOduration=1.736628127 podStartE2EDuration="5.641015129s" podCreationTimestamp="2026-03-04 02:18:39 +0000 UTC" firstStartedPulling="2026-03-04 02:18:40.265995966 +0000 UTC m=+6.230350483" lastFinishedPulling="2026-03-04 02:18:44.170382966 +0000 UTC m=+10.134737485" observedRunningTime="2026-03-04 02:18:44.394131728 +0000 UTC m=+10.358486272" watchObservedRunningTime="2026-03-04 02:18:44.641015129 +0000 UTC m=+10.605369674"
Mar 4 02:18:52.520242 sudo[1904]: pam_unix(sudo:session): session closed for user root
Mar 4 02:18:52.619003 sshd[1900]: pam_unix(sshd:session): session closed for user core
Mar 4 02:18:52.632933 systemd[1]: sshd@6-10.230.66.70:22-20.161.92.111:46742.service: Deactivated successfully.
Mar 4 02:18:52.650734 systemd[1]: session-9.scope: Deactivated successfully.
Mar 4 02:18:52.658103 systemd-logind[1611]: Session 9 logged out. Waiting for processes to exit.
Mar 4 02:18:52.667327 systemd-logind[1611]: Removed session 9.
Mar 4 02:18:56.621776 kubelet[2861]: I0304 02:18:56.621367 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/33754fc5-3024-4a1f-833f-e83bf58d0f5a-typha-certs\") pod \"calico-typha-84896f54b-c58wf\" (UID: \"33754fc5-3024-4a1f-833f-e83bf58d0f5a\") " pod="calico-system/calico-typha-84896f54b-c58wf"
Mar 4 02:18:56.621776 kubelet[2861]: I0304 02:18:56.621474 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33754fc5-3024-4a1f-833f-e83bf58d0f5a-tigera-ca-bundle\") pod \"calico-typha-84896f54b-c58wf\" (UID: \"33754fc5-3024-4a1f-833f-e83bf58d0f5a\") " pod="calico-system/calico-typha-84896f54b-c58wf"
Mar 4 02:18:56.621776 kubelet[2861]: I0304 02:18:56.621537 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb6xq\" (UniqueName: \"kubernetes.io/projected/33754fc5-3024-4a1f-833f-e83bf58d0f5a-kube-api-access-zb6xq\") pod \"calico-typha-84896f54b-c58wf\" (UID: \"33754fc5-3024-4a1f-833f-e83bf58d0f5a\") " pod="calico-system/calico-typha-84896f54b-c58wf"
Mar 4 02:18:56.723086 kubelet[2861]: I0304 02:18:56.722889 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/85902b93-9ec6-479b-a8ee-e03fd351f07f-node-certs\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.723296 kubelet[2861]: I0304 02:18:56.723119 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-sys-fs\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.723296 kubelet[2861]: I0304 02:18:56.723239 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-var-lib-calico\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.723422 kubelet[2861]: I0304 02:18:56.723348 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-xtables-lock\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.724345 kubelet[2861]: I0304 02:18:56.723439 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-cni-log-dir\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.725088 kubelet[2861]: I0304 02:18:56.724711 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-lib-modules\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.725088 kubelet[2861]: I0304 02:18:56.724819 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-policysync\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.725088 kubelet[2861]: I0304 02:18:56.724878 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/85902b93-9ec6-479b-a8ee-e03fd351f07f-tigera-ca-bundle\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.725088 kubelet[2861]: I0304 02:18:56.724907 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-bpffs\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.725088 kubelet[2861]: I0304 02:18:56.724960 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-flexvol-driver-host\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.726853 kubelet[2861]: I0304 02:18:56.725725 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-nodeproc\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.726853 kubelet[2861]: I0304 02:18:56.725763 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-var-run-calico\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx"
Mar 4 02:18:56.726853 kubelet[2861]: I0304 02:18:56.726077 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\"
(UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-cni-bin-dir\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx" Mar 4 02:18:56.726853 kubelet[2861]: I0304 02:18:56.726119 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/85902b93-9ec6-479b-a8ee-e03fd351f07f-cni-net-dir\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx" Mar 4 02:18:56.726853 kubelet[2861]: I0304 02:18:56.726165 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2d9g4\" (UniqueName: \"kubernetes.io/projected/85902b93-9ec6-479b-a8ee-e03fd351f07f-kube-api-access-2d9g4\") pod \"calico-node-g7txx\" (UID: \"85902b93-9ec6-479b-a8ee-e03fd351f07f\") " pod="calico-system/calico-node-g7txx" Mar 4 02:18:56.802831 kubelet[2861]: E0304 02:18:56.802736 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:18:56.842576 kubelet[2861]: E0304 02:18:56.842170 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.842576 kubelet[2861]: W0304 02:18:56.842247 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.842576 kubelet[2861]: E0304 02:18:56.842306 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, 
skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.845159 kubelet[2861]: E0304 02:18:56.843279 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.845159 kubelet[2861]: W0304 02:18:56.843294 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.845159 kubelet[2861]: E0304 02:18:56.843308 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.857122 kubelet[2861]: E0304 02:18:56.856940 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.858644 kubelet[2861]: W0304 02:18:56.858161 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.858644 kubelet[2861]: E0304 02:18:56.858209 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.861915 kubelet[2861]: E0304 02:18:56.861883 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.862047 kubelet[2861]: W0304 02:18:56.862027 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.862478 kubelet[2861]: E0304 02:18:56.862157 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.864840 kubelet[2861]: E0304 02:18:56.863600 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.864840 kubelet[2861]: W0304 02:18:56.863619 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.864840 kubelet[2861]: E0304 02:18:56.863634 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.867909 kubelet[2861]: E0304 02:18:56.867877 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.870973 kubelet[2861]: W0304 02:18:56.869822 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.870973 kubelet[2861]: E0304 02:18:56.869862 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.874486 containerd[1633]: time="2026-03-04T02:18:56.874250859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84896f54b-c58wf,Uid:33754fc5-3024-4a1f-833f-e83bf58d0f5a,Namespace:calico-system,Attempt:0,}" Mar 4 02:18:56.896819 kubelet[2861]: E0304 02:18:56.891467 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.896819 kubelet[2861]: W0304 02:18:56.891527 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.896819 kubelet[2861]: E0304 02:18:56.891562 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.904158 kubelet[2861]: E0304 02:18:56.904129 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.904575 kubelet[2861]: W0304 02:18:56.904548 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.905121 kubelet[2861]: E0304 02:18:56.904949 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.906995 kubelet[2861]: E0304 02:18:56.906844 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.911991 kubelet[2861]: W0304 02:18:56.911958 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.916170 kubelet[2861]: E0304 02:18:56.915432 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.918244 kubelet[2861]: E0304 02:18:56.917072 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.918244 kubelet[2861]: W0304 02:18:56.917092 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.918244 kubelet[2861]: E0304 02:18:56.917110 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.918244 kubelet[2861]: E0304 02:18:56.917426 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.918244 kubelet[2861]: W0304 02:18:56.917440 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.918244 kubelet[2861]: E0304 02:18:56.917456 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.920125 kubelet[2861]: E0304 02:18:56.919902 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.920125 kubelet[2861]: W0304 02:18:56.919921 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.920125 kubelet[2861]: E0304 02:18:56.919937 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.921974 kubelet[2861]: E0304 02:18:56.921952 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.922273 kubelet[2861]: W0304 02:18:56.922086 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.922273 kubelet[2861]: E0304 02:18:56.922112 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.923079 kubelet[2861]: E0304 02:18:56.922723 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.923079 kubelet[2861]: W0304 02:18:56.922739 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.923079 kubelet[2861]: E0304 02:18:56.922953 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.929677 kubelet[2861]: E0304 02:18:56.929490 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.929677 kubelet[2861]: W0304 02:18:56.929526 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.929677 kubelet[2861]: E0304 02:18:56.929544 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.931404 kubelet[2861]: E0304 02:18:56.930683 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.931404 kubelet[2861]: W0304 02:18:56.931137 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.931404 kubelet[2861]: E0304 02:18:56.931158 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.932608 kubelet[2861]: E0304 02:18:56.932578 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.932964 kubelet[2861]: W0304 02:18:56.932703 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.932964 kubelet[2861]: E0304 02:18:56.932733 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.934430 kubelet[2861]: E0304 02:18:56.933948 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.934430 kubelet[2861]: W0304 02:18:56.933985 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.934430 kubelet[2861]: E0304 02:18:56.934002 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.935919 kubelet[2861]: E0304 02:18:56.935898 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.937077 kubelet[2861]: W0304 02:18:56.936051 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.937077 kubelet[2861]: E0304 02:18:56.936079 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.940590 kubelet[2861]: E0304 02:18:56.940194 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.940590 kubelet[2861]: W0304 02:18:56.940224 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.940590 kubelet[2861]: E0304 02:18:56.940269 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.947454 kubelet[2861]: E0304 02:18:56.946944 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.947454 kubelet[2861]: W0304 02:18:56.946972 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.947454 kubelet[2861]: E0304 02:18:56.946996 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.948180 kubelet[2861]: E0304 02:18:56.947779 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.948180 kubelet[2861]: W0304 02:18:56.947829 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.948180 kubelet[2861]: E0304 02:18:56.947844 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.953432 kubelet[2861]: E0304 02:18:56.953401 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.955590 kubelet[2861]: W0304 02:18:56.955560 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.955761 kubelet[2861]: E0304 02:18:56.955737 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.958840 kubelet[2861]: E0304 02:18:56.956784 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.959082 kubelet[2861]: W0304 02:18:56.958994 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.959082 kubelet[2861]: E0304 02:18:56.959020 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.961753 kubelet[2861]: E0304 02:18:56.960852 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.961753 kubelet[2861]: W0304 02:18:56.960875 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.961753 kubelet[2861]: E0304 02:18:56.960903 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.963977 kubelet[2861]: E0304 02:18:56.963950 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.963977 kubelet[2861]: W0304 02:18:56.963972 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.964132 kubelet[2861]: E0304 02:18:56.963997 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.966485 kubelet[2861]: E0304 02:18:56.966435 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.966485 kubelet[2861]: W0304 02:18:56.966457 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.966485 kubelet[2861]: E0304 02:18:56.966503 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.967187 kubelet[2861]: E0304 02:18:56.967160 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.968311 kubelet[2861]: W0304 02:18:56.967181 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.968412 kubelet[2861]: E0304 02:18:56.968315 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.970403 containerd[1633]: time="2026-03-04T02:18:56.970344452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g7txx,Uid:85902b93-9ec6-479b-a8ee-e03fd351f07f,Namespace:calico-system,Attempt:0,}" Mar 4 02:18:56.972225 kubelet[2861]: E0304 02:18:56.972193 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.972225 kubelet[2861]: W0304 02:18:56.972224 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.972376 kubelet[2861]: E0304 02:18:56.972240 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.972892 kubelet[2861]: I0304 02:18:56.972847 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0d0411e4-66e2-420e-a25a-3a6cba15a516-registration-dir\") pod \"csi-node-driver-76s4z\" (UID: \"0d0411e4-66e2-420e-a25a-3a6cba15a516\") " pod="calico-system/csi-node-driver-76s4z" Mar 4 02:18:56.975168 kubelet[2861]: E0304 02:18:56.975133 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.975168 kubelet[2861]: W0304 02:18:56.975154 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.975168 kubelet[2861]: E0304 02:18:56.975170 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.977153 kubelet[2861]: E0304 02:18:56.977123 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.977153 kubelet[2861]: W0304 02:18:56.977151 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.977295 kubelet[2861]: E0304 02:18:56.977165 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:56.993373 kubelet[2861]: E0304 02:18:56.993238 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.993373 kubelet[2861]: W0304 02:18:56.993332 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.993373 kubelet[2861]: E0304 02:18:56.993376 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:56.996614 kubelet[2861]: I0304 02:18:56.996225 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0d0411e4-66e2-420e-a25a-3a6cba15a516-socket-dir\") pod \"csi-node-driver-76s4z\" (UID: \"0d0411e4-66e2-420e-a25a-3a6cba15a516\") " pod="calico-system/csi-node-driver-76s4z" Mar 4 02:18:56.996700 kubelet[2861]: E0304 02:18:56.996666 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:56.996700 kubelet[2861]: W0304 02:18:56.996691 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:56.996860 kubelet[2861]: E0304 02:18:56.996708 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.000710 kubelet[2861]: E0304 02:18:57.000684 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.000710 kubelet[2861]: W0304 02:18:57.000707 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.001940 kubelet[2861]: E0304 02:18:57.000725 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.005268 kubelet[2861]: E0304 02:18:57.003948 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.005268 kubelet[2861]: W0304 02:18:57.003978 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.005268 kubelet[2861]: E0304 02:18:57.004011 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.005268 kubelet[2861]: I0304 02:18:57.004103 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snzlf\" (UniqueName: \"kubernetes.io/projected/0d0411e4-66e2-420e-a25a-3a6cba15a516-kube-api-access-snzlf\") pod \"csi-node-driver-76s4z\" (UID: \"0d0411e4-66e2-420e-a25a-3a6cba15a516\") " pod="calico-system/csi-node-driver-76s4z" Mar 4 02:18:57.009097 kubelet[2861]: E0304 02:18:57.008975 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.009097 kubelet[2861]: W0304 02:18:57.009001 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.009097 kubelet[2861]: E0304 02:18:57.009018 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.009097 kubelet[2861]: I0304 02:18:57.009060 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0d0411e4-66e2-420e-a25a-3a6cba15a516-kubelet-dir\") pod \"csi-node-driver-76s4z\" (UID: \"0d0411e4-66e2-420e-a25a-3a6cba15a516\") " pod="calico-system/csi-node-driver-76s4z" Mar 4 02:18:57.012899 kubelet[2861]: E0304 02:18:57.011450 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.012899 kubelet[2861]: W0304 02:18:57.011504 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.012899 kubelet[2861]: E0304 02:18:57.011535 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.012899 kubelet[2861]: E0304 02:18:57.012402 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.012899 kubelet[2861]: W0304 02:18:57.012419 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.012899 kubelet[2861]: E0304 02:18:57.012433 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.015834 kubelet[2861]: E0304 02:18:57.014930 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.015834 kubelet[2861]: W0304 02:18:57.014953 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.015834 kubelet[2861]: E0304 02:18:57.014979 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.015834 kubelet[2861]: I0304 02:18:57.015015 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0d0411e4-66e2-420e-a25a-3a6cba15a516-varrun\") pod \"csi-node-driver-76s4z\" (UID: \"0d0411e4-66e2-420e-a25a-3a6cba15a516\") " pod="calico-system/csi-node-driver-76s4z" Mar 4 02:18:57.018865 kubelet[2861]: E0304 02:18:57.017022 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.018865 kubelet[2861]: W0304 02:18:57.017042 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.018865 kubelet[2861]: E0304 02:18:57.017060 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.018865 kubelet[2861]: E0304 02:18:57.017344 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.018865 kubelet[2861]: W0304 02:18:57.017357 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.018865 kubelet[2861]: E0304 02:18:57.017371 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.018865 kubelet[2861]: E0304 02:18:57.018853 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.018865 kubelet[2861]: W0304 02:18:57.018868 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.019327 kubelet[2861]: E0304 02:18:57.018889 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.021427 kubelet[2861]: E0304 02:18:57.020943 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.021427 kubelet[2861]: W0304 02:18:57.020961 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.021427 kubelet[2861]: E0304 02:18:57.020975 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.116351 kubelet[2861]: E0304 02:18:57.116310 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.116703 kubelet[2861]: W0304 02:18:57.116676 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.116974 kubelet[2861]: E0304 02:18:57.116875 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.117654 kubelet[2861]: E0304 02:18:57.117624 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.117880 kubelet[2861]: W0304 02:18:57.117753 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.117880 kubelet[2861]: E0304 02:18:57.117783 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.118895 kubelet[2861]: E0304 02:18:57.118868 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.118895 kubelet[2861]: W0304 02:18:57.118893 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.119012 kubelet[2861]: E0304 02:18:57.118911 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.119243 kubelet[2861]: E0304 02:18:57.119207 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.119243 kubelet[2861]: W0304 02:18:57.119242 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.119545 kubelet[2861]: E0304 02:18:57.119256 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.119683 kubelet[2861]: E0304 02:18:57.119545 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.119683 kubelet[2861]: W0304 02:18:57.119559 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.119683 kubelet[2861]: E0304 02:18:57.119573 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.120919 kubelet[2861]: E0304 02:18:57.120401 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.120919 kubelet[2861]: W0304 02:18:57.120415 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.120919 kubelet[2861]: E0304 02:18:57.120432 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.122008 kubelet[2861]: E0304 02:18:57.121982 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.122008 kubelet[2861]: W0304 02:18:57.122007 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.122142 kubelet[2861]: E0304 02:18:57.122022 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.122448 kubelet[2861]: E0304 02:18:57.122266 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.122448 kubelet[2861]: W0304 02:18:57.122285 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.122448 kubelet[2861]: E0304 02:18:57.122300 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.122663 kubelet[2861]: E0304 02:18:57.122619 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.122663 kubelet[2861]: W0304 02:18:57.122632 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.122663 kubelet[2861]: E0304 02:18:57.122645 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.123359 kubelet[2861]: E0304 02:18:57.122897 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.123359 kubelet[2861]: W0304 02:18:57.122915 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.123359 kubelet[2861]: E0304 02:18:57.122929 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.123359 kubelet[2861]: E0304 02:18:57.123206 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.123359 kubelet[2861]: W0304 02:18:57.123219 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.123359 kubelet[2861]: E0304 02:18:57.123232 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.124384 kubelet[2861]: E0304 02:18:57.124019 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.124384 kubelet[2861]: W0304 02:18:57.124037 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.124384 kubelet[2861]: E0304 02:18:57.124051 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.126888 kubelet[2861]: E0304 02:18:57.126614 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.126888 kubelet[2861]: W0304 02:18:57.126632 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.126888 kubelet[2861]: E0304 02:18:57.126652 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.129637 kubelet[2861]: E0304 02:18:57.129611 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.129637 kubelet[2861]: W0304 02:18:57.129633 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.129950 kubelet[2861]: E0304 02:18:57.129649 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.130294 kubelet[2861]: E0304 02:18:57.130249 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.130294 kubelet[2861]: W0304 02:18:57.130270 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.131748 kubelet[2861]: E0304 02:18:57.130683 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.132605 kubelet[2861]: E0304 02:18:57.132379 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.132605 kubelet[2861]: W0304 02:18:57.132393 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.132605 kubelet[2861]: E0304 02:18:57.132408 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.136277 kubelet[2861]: E0304 02:18:57.135853 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.136277 kubelet[2861]: W0304 02:18:57.135873 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.136277 kubelet[2861]: E0304 02:18:57.135888 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.140155 kubelet[2861]: E0304 02:18:57.139240 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.140155 kubelet[2861]: W0304 02:18:57.139262 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.140155 kubelet[2861]: E0304 02:18:57.139279 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.141473 kubelet[2861]: E0304 02:18:57.141011 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.141473 kubelet[2861]: W0304 02:18:57.141030 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.141473 kubelet[2861]: E0304 02:18:57.141045 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.141473 kubelet[2861]: E0304 02:18:57.141316 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.141473 kubelet[2861]: W0304 02:18:57.141330 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.141473 kubelet[2861]: E0304 02:18:57.141353 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.141616 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.142557 kubelet[2861]: W0304 02:18:57.141629 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.141643 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.141941 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.142557 kubelet[2861]: W0304 02:18:57.141967 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.141981 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.142239 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.142557 kubelet[2861]: W0304 02:18:57.142281 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.142557 kubelet[2861]: E0304 02:18:57.142298 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.145086 kubelet[2861]: E0304 02:18:57.142969 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.145086 kubelet[2861]: W0304 02:18:57.142983 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.145086 kubelet[2861]: E0304 02:18:57.143000 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.145086 kubelet[2861]: E0304 02:18:57.144368 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.145086 kubelet[2861]: W0304 02:18:57.144621 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.145086 kubelet[2861]: E0304 02:18:57.144655 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 4 02:18:57.153420 containerd[1633]: time="2026-03-04T02:18:57.152326976Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:18:57.153420 containerd[1633]: time="2026-03-04T02:18:57.152517897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:18:57.153420 containerd[1633]: time="2026-03-04T02:18:57.152542324Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:18:57.153420 containerd[1633]: time="2026-03-04T02:18:57.152950734Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:18:57.161368 containerd[1633]: time="2026-03-04T02:18:57.161140258Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:18:57.161368 containerd[1633]: time="2026-03-04T02:18:57.161279554Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:18:57.162281 containerd[1633]: time="2026-03-04T02:18:57.162008755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:18:57.163331 containerd[1633]: time="2026-03-04T02:18:57.162864825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:18:57.163832 kubelet[2861]: E0304 02:18:57.163773 2861 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 4 02:18:57.163944 kubelet[2861]: W0304 02:18:57.163834 2861 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 4 02:18:57.163944 kubelet[2861]: E0304 02:18:57.163870 2861 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 4 02:18:57.277046 containerd[1633]: time="2026-03-04T02:18:57.276786419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g7txx,Uid:85902b93-9ec6-479b-a8ee-e03fd351f07f,Namespace:calico-system,Attempt:0,} returns sandbox id \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\"" Mar 4 02:18:57.321279 containerd[1633]: time="2026-03-04T02:18:57.321077110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-84896f54b-c58wf,Uid:33754fc5-3024-4a1f-833f-e83bf58d0f5a,Namespace:calico-system,Attempt:0,} returns sandbox id \"849b7c7c04a1a89025ec3ba2b8fd6d3f6017c6c143e9428fcf35e52902f54ec9\"" Mar 4 02:18:57.329283 containerd[1633]: time="2026-03-04T02:18:57.329253386Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Mar 4 02:18:58.307429 kubelet[2861]: E0304 02:18:58.307286 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:18:58.906865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount773855472.mount: Deactivated successfully. 
Mar 4 02:18:59.087910 containerd[1633]: time="2026-03-04T02:18:59.087839828Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:18:59.090995 containerd[1633]: time="2026-03-04T02:18:59.090921475Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=6186433" Mar 4 02:18:59.091744 containerd[1633]: time="2026-03-04T02:18:59.091687609Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:18:59.094830 containerd[1633]: time="2026-03-04T02:18:59.094737708Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:18:59.097213 containerd[1633]: time="2026-03-04T02:18:59.096170388Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.766760324s" Mar 4 02:18:59.097213 containerd[1633]: time="2026-03-04T02:18:59.096228838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 4 02:18:59.099110 containerd[1633]: time="2026-03-04T02:18:59.098898306Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Mar 4 02:18:59.122576 containerd[1633]: time="2026-03-04T02:18:59.122527241Z" level=info msg="CreateContainer within sandbox 
\"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 4 02:18:59.156828 containerd[1633]: time="2026-03-04T02:18:59.156264997Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760\"" Mar 4 02:18:59.159257 containerd[1633]: time="2026-03-04T02:18:59.158973317Z" level=info msg="StartContainer for \"57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760\"" Mar 4 02:18:59.365499 containerd[1633]: time="2026-03-04T02:18:59.365409631Z" level=info msg="StartContainer for \"57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760\" returns successfully" Mar 4 02:18:59.494375 containerd[1633]: time="2026-03-04T02:18:59.483510953Z" level=info msg="shim disconnected" id=57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760 namespace=k8s.io Mar 4 02:18:59.495094 containerd[1633]: time="2026-03-04T02:18:59.494833386Z" level=warning msg="cleaning up after shim disconnected" id=57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760 namespace=k8s.io Mar 4 02:18:59.495094 containerd[1633]: time="2026-03-04T02:18:59.494869995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:18:59.841673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57a0a1ff37f6308d69997f14f8a84c90c7d24ce19d45f4dcc7463a102c9de760-rootfs.mount: Deactivated successfully. 
Mar 4 02:19:00.307535 kubelet[2861]: E0304 02:19:00.306935 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:02.307277 kubelet[2861]: E0304 02:19:02.307165 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:02.537736 containerd[1633]: time="2026-03-04T02:19:02.537643346Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:02.539157 containerd[1633]: time="2026-03-04T02:19:02.538932107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=34551413" Mar 4 02:19:02.540387 containerd[1633]: time="2026-03-04T02:19:02.539875719Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:02.547178 containerd[1633]: time="2026-03-04T02:19:02.547139003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:02.549747 containerd[1633]: time="2026-03-04T02:19:02.549705740Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", 
repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 3.450764966s" Mar 4 02:19:02.550534 containerd[1633]: time="2026-03-04T02:19:02.550409583Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Mar 4 02:19:02.552383 containerd[1633]: time="2026-03-04T02:19:02.552070338Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 4 02:19:02.593828 containerd[1633]: time="2026-03-04T02:19:02.593486482Z" level=info msg="CreateContainer within sandbox \"849b7c7c04a1a89025ec3ba2b8fd6d3f6017c6c143e9428fcf35e52902f54ec9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 4 02:19:02.614672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3397864278.mount: Deactivated successfully. Mar 4 02:19:02.615558 containerd[1633]: time="2026-03-04T02:19:02.614993901Z" level=info msg="CreateContainer within sandbox \"849b7c7c04a1a89025ec3ba2b8fd6d3f6017c6c143e9428fcf35e52902f54ec9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"7e5d3c52f9f1fac2f53e82ed4679fe6fc8373270559a10ba34e61244b2ba2190\"" Mar 4 02:19:02.623377 containerd[1633]: time="2026-03-04T02:19:02.623338649Z" level=info msg="StartContainer for \"7e5d3c52f9f1fac2f53e82ed4679fe6fc8373270559a10ba34e61244b2ba2190\"" Mar 4 02:19:02.746737 containerd[1633]: time="2026-03-04T02:19:02.746663896Z" level=info msg="StartContainer for \"7e5d3c52f9f1fac2f53e82ed4679fe6fc8373270559a10ba34e61244b2ba2190\" returns successfully" Mar 4 02:19:03.587415 kubelet[2861]: I0304 02:19:03.585641 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-84896f54b-c58wf" podStartSLOduration=2.365233052 podStartE2EDuration="7.585593108s" podCreationTimestamp="2026-03-04 02:18:56 +0000 UTC" firstStartedPulling="2026-03-04 
02:18:57.331050734 +0000 UTC m=+23.295405258" lastFinishedPulling="2026-03-04 02:19:02.551410781 +0000 UTC m=+28.515765314" observedRunningTime="2026-03-04 02:19:03.581898477 +0000 UTC m=+29.546253005" watchObservedRunningTime="2026-03-04 02:19:03.585593108 +0000 UTC m=+29.549947657" Mar 4 02:19:04.310839 kubelet[2861]: E0304 02:19:04.307615 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:06.312780 kubelet[2861]: E0304 02:19:06.310781 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:08.306837 kubelet[2861]: E0304 02:19:08.306639 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:10.307310 kubelet[2861]: E0304 02:19:10.307210 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:12.312756 kubelet[2861]: E0304 02:19:12.312019 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:12.772501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108646290.mount: Deactivated successfully. Mar 4 02:19:12.845822 containerd[1633]: time="2026-03-04T02:19:12.838865786Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 4 02:19:12.845822 containerd[1633]: time="2026-03-04T02:19:12.832647468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:12.847235 containerd[1633]: time="2026-03-04T02:19:12.845787495Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:12.848930 containerd[1633]: time="2026-03-04T02:19:12.848873836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:12.850535 containerd[1633]: time="2026-03-04T02:19:12.850137935Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 10.29801981s" Mar 4 02:19:12.850535 containerd[1633]: time="2026-03-04T02:19:12.850199341Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 4 02:19:12.970346 containerd[1633]: 
time="2026-03-04T02:19:12.970246384Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 4 02:19:13.007447 containerd[1633]: time="2026-03-04T02:19:13.007378968Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9\"" Mar 4 02:19:13.014756 containerd[1633]: time="2026-03-04T02:19:13.013746537Z" level=info msg="StartContainer for \"29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9\"" Mar 4 02:19:13.202967 containerd[1633]: time="2026-03-04T02:19:13.202200866Z" level=info msg="StartContainer for \"29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9\" returns successfully" Mar 4 02:19:13.597476 containerd[1633]: time="2026-03-04T02:19:13.594700028Z" level=info msg="shim disconnected" id=29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9 namespace=k8s.io Mar 4 02:19:13.597476 containerd[1633]: time="2026-03-04T02:19:13.597026540Z" level=warning msg="cleaning up after shim disconnected" id=29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9 namespace=k8s.io Mar 4 02:19:13.597476 containerd[1633]: time="2026-03-04T02:19:13.597050359Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:19:13.770934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29ac522680cf3df1594c43d13f2e883ce5a565c01f0ce5d32825a7f032eb68e9-rootfs.mount: Deactivated successfully. Mar 4 02:19:13.833111 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:13.828569 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:13.828685 systemd-resolved[1518]: Flushed all caches. 
Mar 4 02:19:14.307429 kubelet[2861]: E0304 02:19:14.307149 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:14.628473 containerd[1633]: time="2026-03-04T02:19:14.627028578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 4 02:19:15.881887 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:15.876031 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:15.876081 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:16.310884 kubelet[2861]: E0304 02:19:16.308349 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:18.307415 kubelet[2861]: E0304 02:19:18.307312 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:19.389395 containerd[1633]: time="2026-03-04T02:19:19.378353800Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:19.389395 containerd[1633]: time="2026-03-04T02:19:19.380502407Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 4 02:19:19.389395 containerd[1633]: 
time="2026-03-04T02:19:19.384200655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.757058056s" Mar 4 02:19:19.389395 containerd[1633]: time="2026-03-04T02:19:19.388069419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 4 02:19:19.391692 containerd[1633]: time="2026-03-04T02:19:19.390344345Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:19.394608 containerd[1633]: time="2026-03-04T02:19:19.392787833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:19.398842 containerd[1633]: time="2026-03-04T02:19:19.398705530Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 4 02:19:19.425322 containerd[1633]: time="2026-03-04T02:19:19.424619572Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8\"" Mar 4 02:19:19.428613 containerd[1633]: time="2026-03-04T02:19:19.426209246Z" level=info msg="StartContainer for \"3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8\"" Mar 4 02:19:19.426494 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount701122743.mount: Deactivated successfully. Mar 4 02:19:19.556775 containerd[1633]: time="2026-03-04T02:19:19.556171488Z" level=info msg="StartContainer for \"3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8\" returns successfully" Mar 4 02:19:20.307355 kubelet[2861]: E0304 02:19:20.306966 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-76s4z" podUID="0d0411e4-66e2-420e-a25a-3a6cba15a516" Mar 4 02:19:20.785333 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8-rootfs.mount: Deactivated successfully. Mar 4 02:19:20.789497 containerd[1633]: time="2026-03-04T02:19:20.789018005Z" level=info msg="shim disconnected" id=3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8 namespace=k8s.io Mar 4 02:19:20.789497 containerd[1633]: time="2026-03-04T02:19:20.789157019Z" level=warning msg="cleaning up after shim disconnected" id=3eaaea4821e903287dc9d90d0b0a52b097207d5ad2c2553b2463f7391c3b84f8 namespace=k8s.io Mar 4 02:19:20.789497 containerd[1633]: time="2026-03-04T02:19:20.789179751Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:19:20.818841 kubelet[2861]: I0304 02:19:20.817240 2861 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 4 02:19:20.823101 containerd[1633]: time="2026-03-04T02:19:20.823000184Z" level=warning msg="cleanup warnings time=\"2026-03-04T02:19:20Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Mar 4 02:19:21.056344 containerd[1633]: time="2026-03-04T02:19:21.054581211Z" level=info msg="CreateContainer within sandbox 
\"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 4 02:19:21.077569 kubelet[2861]: I0304 02:19:21.076174 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f95e2c4-0e2b-45a3-999b-371f9b171489-config-volume\") pod \"coredns-674b8bbfcf-v4hct\" (UID: \"1f95e2c4-0e2b-45a3-999b-371f9b171489\") " pod="kube-system/coredns-674b8bbfcf-v4hct" Mar 4 02:19:21.077569 kubelet[2861]: I0304 02:19:21.076270 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmfw2\" (UniqueName: \"kubernetes.io/projected/b9c9516a-f907-4907-a5cc-ffce8e9f7515-kube-api-access-xmfw2\") pod \"coredns-674b8bbfcf-gqxzr\" (UID: \"b9c9516a-f907-4907-a5cc-ffce8e9f7515\") " pod="kube-system/coredns-674b8bbfcf-gqxzr" Mar 4 02:19:21.077569 kubelet[2861]: I0304 02:19:21.076309 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-nginx-config\") pod \"whisker-675b5b64c-gmzhg\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " pod="calico-system/whisker-675b5b64c-gmzhg" Mar 4 02:19:21.077569 kubelet[2861]: I0304 02:19:21.076366 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jfk7\" (UniqueName: \"kubernetes.io/projected/209ee552-e07e-43c8-b4cb-0bb161f67e80-kube-api-access-6jfk7\") pod \"whisker-675b5b64c-gmzhg\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " pod="calico-system/whisker-675b5b64c-gmzhg" Mar 4 02:19:21.077569 kubelet[2861]: I0304 02:19:21.076404 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/b9c9516a-f907-4907-a5cc-ffce8e9f7515-config-volume\") pod \"coredns-674b8bbfcf-gqxzr\" (UID: \"b9c9516a-f907-4907-a5cc-ffce8e9f7515\") " pod="kube-system/coredns-674b8bbfcf-gqxzr" Mar 4 02:19:21.079191 kubelet[2861]: I0304 02:19:21.076430 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-backend-key-pair\") pod \"whisker-675b5b64c-gmzhg\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " pod="calico-system/whisker-675b5b64c-gmzhg" Mar 4 02:19:21.079191 kubelet[2861]: I0304 02:19:21.076457 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-ca-bundle\") pod \"whisker-675b5b64c-gmzhg\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " pod="calico-system/whisker-675b5b64c-gmzhg" Mar 4 02:19:21.079191 kubelet[2861]: I0304 02:19:21.076505 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s95bf\" (UniqueName: \"kubernetes.io/projected/1f95e2c4-0e2b-45a3-999b-371f9b171489-kube-api-access-s95bf\") pod \"coredns-674b8bbfcf-v4hct\" (UID: \"1f95e2c4-0e2b-45a3-999b-371f9b171489\") " pod="kube-system/coredns-674b8bbfcf-v4hct" Mar 4 02:19:21.091510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2709653813.mount: Deactivated successfully. 
Mar 4 02:19:21.098818 containerd[1633]: time="2026-03-04T02:19:21.095998371Z" level=info msg="CreateContainer within sandbox \"d5b14a04f90ebd92fcdffcf90b52c34746622d08b9c135065d9d6a0955beed31\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d013544a4a4f14357e597fd78362e1514a3f2039defcce1659f1dd18e6eb455f\"" Mar 4 02:19:21.136943 containerd[1633]: time="2026-03-04T02:19:21.136886088Z" level=info msg="StartContainer for \"d013544a4a4f14357e597fd78362e1514a3f2039defcce1659f1dd18e6eb455f\"" Mar 4 02:19:21.177628 kubelet[2861]: I0304 02:19:21.177472 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95f1b570-a2f8-4407-94d9-6b1231857727-calico-apiserver-certs\") pod \"calico-apiserver-6b7d4f4c7f-nd4xh\" (UID: \"95f1b570-a2f8-4407-94d9-6b1231857727\") " pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" Mar 4 02:19:21.178121 kubelet[2861]: I0304 02:19:21.177765 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tss44\" (UniqueName: \"kubernetes.io/projected/9c3e591b-3214-4ea2-9f53-b02ad45f9933-kube-api-access-tss44\") pod \"calico-kube-controllers-64b664658b-2dz7l\" (UID: \"9c3e591b-3214-4ea2-9f53-b02ad45f9933\") " pod="calico-system/calico-kube-controllers-64b664658b-2dz7l" Mar 4 02:19:21.178121 kubelet[2861]: I0304 02:19:21.177862 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqkd9\" (UniqueName: \"kubernetes.io/projected/be02ed94-fac8-40fb-bf12-2f0443c96f50-kube-api-access-sqkd9\") pod \"calico-apiserver-6b7d4f4c7f-qrf7j\" (UID: \"be02ed94-fac8-40fb-bf12-2f0443c96f50\") " pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" Mar 4 02:19:21.178121 kubelet[2861]: I0304 02:19:21.178014 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6c252b40-b769-41c3-a985-3ad359dfb9c2-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-9npx6\" (UID: \"6c252b40-b769-41c3-a985-3ad359dfb9c2\") " pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:21.178121 kubelet[2861]: I0304 02:19:21.178054 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6c252b40-b769-41c3-a985-3ad359dfb9c2-goldmane-key-pair\") pod \"goldmane-5b85766d88-9npx6\" (UID: \"6c252b40-b769-41c3-a985-3ad359dfb9c2\") " pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:21.178121 kubelet[2861]: I0304 02:19:21.178112 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8fcc\" (UniqueName: \"kubernetes.io/projected/95f1b570-a2f8-4407-94d9-6b1231857727-kube-api-access-d8fcc\") pod \"calico-apiserver-6b7d4f4c7f-nd4xh\" (UID: \"95f1b570-a2f8-4407-94d9-6b1231857727\") " pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" Mar 4 02:19:21.179546 kubelet[2861]: I0304 02:19:21.178172 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldjl6\" (UniqueName: \"kubernetes.io/projected/6c252b40-b769-41c3-a985-3ad359dfb9c2-kube-api-access-ldjl6\") pod \"goldmane-5b85766d88-9npx6\" (UID: \"6c252b40-b769-41c3-a985-3ad359dfb9c2\") " pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:21.179546 kubelet[2861]: I0304 02:19:21.178246 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be02ed94-fac8-40fb-bf12-2f0443c96f50-calico-apiserver-certs\") pod \"calico-apiserver-6b7d4f4c7f-qrf7j\" (UID: \"be02ed94-fac8-40fb-bf12-2f0443c96f50\") " pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" Mar 4 02:19:21.179546 kubelet[2861]: I0304 02:19:21.178273 2861 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9c3e591b-3214-4ea2-9f53-b02ad45f9933-tigera-ca-bundle\") pod \"calico-kube-controllers-64b664658b-2dz7l\" (UID: \"9c3e591b-3214-4ea2-9f53-b02ad45f9933\") " pod="calico-system/calico-kube-controllers-64b664658b-2dz7l" Mar 4 02:19:21.179546 kubelet[2861]: I0304 02:19:21.178328 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6c252b40-b769-41c3-a985-3ad359dfb9c2-config\") pod \"goldmane-5b85766d88-9npx6\" (UID: \"6c252b40-b769-41c3-a985-3ad359dfb9c2\") " pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:21.351181 containerd[1633]: time="2026-03-04T02:19:21.351020974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9npx6,Uid:6c252b40-b769-41c3-a985-3ad359dfb9c2,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:21.364375 containerd[1633]: time="2026-03-04T02:19:21.363593918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v4hct,Uid:1f95e2c4-0e2b-45a3-999b-371f9b171489,Namespace:kube-system,Attempt:0,}" Mar 4 02:19:21.374722 containerd[1633]: time="2026-03-04T02:19:21.374165516Z" level=info msg="StartContainer for \"d013544a4a4f14357e597fd78362e1514a3f2039defcce1659f1dd18e6eb455f\" returns successfully" Mar 4 02:19:21.407678 containerd[1633]: time="2026-03-04T02:19:21.407418774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675b5b64c-gmzhg,Uid:209ee552-e07e-43c8-b4cb-0bb161f67e80,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:21.408881 containerd[1633]: time="2026-03-04T02:19:21.408664553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqxzr,Uid:b9c9516a-f907-4907-a5cc-ffce8e9f7515,Namespace:kube-system,Attempt:0,}" Mar 4 02:19:21.427951 containerd[1633]: time="2026-03-04T02:19:21.427842915Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b664658b-2dz7l,Uid:9c3e591b-3214-4ea2-9f53-b02ad45f9933,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:21.435220 containerd[1633]: time="2026-03-04T02:19:21.434985452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-qrf7j,Uid:be02ed94-fac8-40fb-bf12-2f0443c96f50,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:21.435220 containerd[1633]: time="2026-03-04T02:19:21.435054405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-nd4xh,Uid:95f1b570-a2f8-4407-94d9-6b1231857727,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:21.834115 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:21.833106 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:21.833137 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:22.173217 kubelet[2861]: I0304 02:19:22.164623 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g7txx" podStartSLOduration=4.029874257 podStartE2EDuration="26.140746508s" podCreationTimestamp="2026-03-04 02:18:56 +0000 UTC" firstStartedPulling="2026-03-04 02:18:57.279943981 +0000 UTC m=+23.244298503" lastFinishedPulling="2026-03-04 02:19:19.390816217 +0000 UTC m=+45.355170754" observedRunningTime="2026-03-04 02:19:22.139757933 +0000 UTC m=+48.104112474" watchObservedRunningTime="2026-03-04 02:19:22.140746508 +0000 UTC m=+48.105101052" Mar 4 02:19:22.322680 containerd[1633]: time="2026-03-04T02:19:22.322386081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76s4z,Uid:0d0411e4-66e2-420e-a25a-3a6cba15a516,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:22.965148 systemd-networkd[1264]: cali0bff7bddfbd: Link UP Mar 4 02:19:22.970264 systemd-networkd[1264]: cali0bff7bddfbd: Gained carrier Mar 4 02:19:23.078090 kubelet[2861]: I0304 02:19:23.078023 2861 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Mar 4 02:19:23.103560 systemd-networkd[1264]: cali16a3c47a355: Link UP Mar 4 02:19:23.107960 systemd-networkd[1264]: cali16a3c47a355: Gained carrier Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.066 [ERROR][3736] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.193 [INFO][3736] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0 coredns-674b8bbfcf- kube-system 1f95e2c4-0e2b-45a3-999b-371f9b171489 862 0 2026-03-04 02:18:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com coredns-674b8bbfcf-v4hct eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0bff7bddfbd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.193 [INFO][3736] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.649 [INFO][3851] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" 
HandleID="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.693 [INFO][3851] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" HandleID="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e3b0), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-v4hct", "timestamp":"2026-03-04 02:19:22.649272949 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004fa000)} Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.693 [INFO][3851] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.693 [INFO][3851] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.693 [INFO][3851] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.712 [INFO][3851] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.764 [INFO][3851] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.822 [INFO][3851] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.828 [INFO][3851] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.836 [INFO][3851] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.836 [INFO][3851] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.841 [INFO][3851] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481 Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.860 [INFO][3851] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.876 [INFO][3851] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.193/26] block=192.168.10.192/26 handle="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.876 [INFO][3851] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.193/26] handle="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.877 [INFO][3851] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.110864 containerd[1633]: 2026-03-04 02:19:22.877 [INFO][3851] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.193/26] IPv6=[] ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" HandleID="k8s-pod-network.1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.112657 containerd[1633]: 2026-03-04 02:19:22.895 [INFO][3736] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f95e2c4-0e2b-45a3-999b-371f9b171489", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-v4hct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0bff7bddfbd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.112657 containerd[1633]: 2026-03-04 02:19:22.898 [INFO][3736] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.193/32] ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.112657 containerd[1633]: 2026-03-04 02:19:22.898 [INFO][3736] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0bff7bddfbd ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.112657 containerd[1633]: 
2026-03-04 02:19:22.984 [INFO][3736] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.112657 containerd[1633]: 2026-03-04 02:19:23.007 [INFO][3736] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"1f95e2c4-0e2b-45a3-999b-371f9b171489", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481", Pod:"coredns-674b8bbfcf-v4hct", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali0bff7bddfbd", MAC:"d6:e2:ca:b1:c0:5a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.112657 containerd[1633]: 2026-03-04 02:19:23.085 [INFO][3736] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481" Namespace="kube-system" Pod="coredns-674b8bbfcf-v4hct" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--v4hct-eth0" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.090 [ERROR][3780] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.190 [INFO][3780] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0 calico-kube-controllers-64b664658b- calico-system 9c3e591b-3214-4ea2-9f53-b02ad45f9933 867 0 2026-03-04 02:18:56 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:64b664658b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com calico-kube-controllers-64b664658b-2dz7l eth0 calico-kube-controllers [] [] [kns.calico-system 
ksa.calico-system.calico-kube-controllers] cali16a3c47a355 [] [] }} ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.195 [INFO][3780] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.622 [INFO][3858] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" HandleID="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.691 [INFO][3858] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" HandleID="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ce390), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"calico-kube-controllers-64b664658b-2dz7l", "timestamp":"2026-03-04 02:19:22.622516184 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b0420)} Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.694 [INFO][3858] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.878 [INFO][3858] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.878 [INFO][3858] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.887 [INFO][3858] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.899 [INFO][3858] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.929 [INFO][3858] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.938 [INFO][3858] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.981 [INFO][3858] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.982 [INFO][3858] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:22.989 [INFO][3858] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92 Mar 4 02:19:23.187478 
containerd[1633]: 2026-03-04 02:19:23.017 [INFO][3858] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:23.065 [INFO][3858] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.194/26] block=192.168.10.192/26 handle="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:23.065 [INFO][3858] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.194/26] handle="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:23.065 [INFO][3858] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.187478 containerd[1633]: 2026-03-04 02:19:23.065 [INFO][3858] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.194/26] IPv6=[] ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" HandleID="k8s-pod-network.dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.098 [INFO][3780] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0", 
GenerateName:"calico-kube-controllers-64b664658b-", Namespace:"calico-system", SelfLink:"", UID:"9c3e591b-3214-4ea2-9f53-b02ad45f9933", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b664658b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"calico-kube-controllers-64b664658b-2dz7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a3c47a355", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.098 [INFO][3780] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.194/32] ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.098 [INFO][3780] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali16a3c47a355 ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" 
Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.103 [INFO][3780] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.128 [INFO][3780] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0", GenerateName:"calico-kube-controllers-64b664658b-", Namespace:"calico-system", SelfLink:"", UID:"9c3e591b-3214-4ea2-9f53-b02ad45f9933", ResourceVersion:"867", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"64b664658b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92", Pod:"calico-kube-controllers-64b664658b-2dz7l", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.10.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali16a3c47a355", MAC:"82:3e:53:56:fd:f6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.188682 containerd[1633]: 2026-03-04 02:19:23.178 [INFO][3780] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92" Namespace="calico-system" Pod="calico-kube-controllers-64b664658b-2dz7l" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--kube--controllers--64b664658b--2dz7l-eth0" Mar 4 02:19:23.267451 systemd-networkd[1264]: calia058f725f92: Link UP Mar 4 02:19:23.275267 systemd-networkd[1264]: calia058f725f92: Gained carrier Mar 4 02:19:23.311828 containerd[1633]: time="2026-03-04T02:19:23.310583246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:23.313387 containerd[1633]: time="2026-03-04T02:19:23.312998106Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:23.313387 containerd[1633]: time="2026-03-04T02:19:23.313071560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.817 [INFO][3888] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.820 [INFO][3888] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" iface="eth0" netns="/var/run/netns/cni-8b36cfbb-3f3a-e75a-334f-b7f7e72bd822" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.821 [INFO][3888] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" iface="eth0" netns="/var/run/netns/cni-8b36cfbb-3f3a-e75a-334f-b7f7e72bd822" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.823 [INFO][3888] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" iface="eth0" netns="/var/run/netns/cni-8b36cfbb-3f3a-e75a-334f-b7f7e72bd822" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.823 [INFO][3888] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:22.823 [INFO][3888] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.024 [INFO][3941] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" HandleID="k8s-pod-network.67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.025 [INFO][3941] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.223 [INFO][3941] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.269 [WARNING][3941] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" HandleID="k8s-pod-network.67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.270 [INFO][3941] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" HandleID="k8s-pod-network.67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.288 [INFO][3941] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.314571 containerd[1633]: 2026-03-04 02:19:23.304 [INFO][3888] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3" Mar 4 02:19:23.319608 systemd[1]: run-netns-cni\x2d8b36cfbb\x2d3f3a\x2de75a\x2d334f\x2db7f7e72bd822.mount: Deactivated successfully. Mar 4 02:19:23.325316 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3-shm.mount: Deactivated successfully. Mar 4 02:19:23.344992 containerd[1633]: time="2026-03-04T02:19:23.342725459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.044 [ERROR][3742] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.191 [INFO][3742] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0 whisker-675b5b64c- calico-system 209ee552-e07e-43c8-b4cb-0bb161f67e80 883 0 2026-03-04 02:19:03 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:675b5b64c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com whisker-675b5b64c-gmzhg eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia058f725f92 [] [] }} ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.191 [INFO][3742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.698 [INFO][3840] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 
02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.758 [INFO][3840] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a33a0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"whisker-675b5b64c-gmzhg", "timestamp":"2026-03-04 02:19:22.698647684 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004d0420)} Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:22.760 [INFO][3840] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.067 [INFO][3840] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.071 [INFO][3840] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.092 [INFO][3840] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.137 [INFO][3840] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.155 [INFO][3840] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.165 [INFO][3840] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.179 [INFO][3840] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.180 [INFO][3840] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.185 [INFO][3840] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295 Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.198 [INFO][3840] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.221 [INFO][3840] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.195/26] block=192.168.10.192/26 handle="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.221 [INFO][3840] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.195/26] handle="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.221 [INFO][3840] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.366016 containerd[1633]: 2026-03-04 02:19:23.221 [INFO][3840] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.195/26] IPv6=[] ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.242 [INFO][3742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0", GenerateName:"whisker-675b5b64c-", Namespace:"calico-system", SelfLink:"", UID:"209ee552-e07e-43c8-b4cb-0bb161f67e80", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675b5b64c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"whisker-675b5b64c-gmzhg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia058f725f92", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.245 [INFO][3742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.195/32] ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.247 [INFO][3742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia058f725f92 ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.280 [INFO][3742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.281 [INFO][3742] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0", GenerateName:"whisker-675b5b64c-", Namespace:"calico-system", SelfLink:"", UID:"209ee552-e07e-43c8-b4cb-0bb161f67e80", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 19, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"675b5b64c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295", Pod:"whisker-675b5b64c-gmzhg", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia058f725f92", MAC:"12:56:99:99:1b:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.370111 containerd[1633]: 2026-03-04 02:19:23.343 [INFO][3742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" 
Namespace="calico-system" Pod="whisker-675b5b64c-gmzhg" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:23.407075 containerd[1633]: time="2026-03-04T02:19:23.406462780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:23.407075 containerd[1633]: time="2026-03-04T02:19:23.406559543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:23.407075 containerd[1633]: time="2026-03-04T02:19:23.406583560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.407075 containerd[1633]: time="2026-03-04T02:19:23.406727336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.479929 systemd-networkd[1264]: cali550fc22606e: Link UP Mar 4 02:19:23.488312 systemd-networkd[1264]: cali550fc22606e: Gained carrier Mar 4 02:19:23.492082 containerd[1633]: time="2026-03-04T02:19:23.491160565Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqxzr,Uid:b9c9516a-f907-4907-a5cc-ffce8e9f7515,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.769 [INFO][3862] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.770 [INFO][3862] cni-plugin/dataplane_linux.go 559: Deleting workload's device in 
netns. ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" iface="eth0" netns="/var/run/netns/cni-7d5003f8-1229-dfde-352e-8e12f8afc257" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.770 [INFO][3862] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" iface="eth0" netns="/var/run/netns/cni-7d5003f8-1229-dfde-352e-8e12f8afc257" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.774 [INFO][3862] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" iface="eth0" netns="/var/run/netns/cni-7d5003f8-1229-dfde-352e-8e12f8afc257" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.775 [INFO][3862] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:22.775 [INFO][3862] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.178 [INFO][3932] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" HandleID="k8s-pod-network.985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.186 [INFO][3932] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.434 [INFO][3932] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.451 [WARNING][3932] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" HandleID="k8s-pod-network.985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.451 [INFO][3932] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" HandleID="k8s-pod-network.985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.454 [INFO][3932] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.495460 containerd[1633]: 2026-03-04 02:19:23.467 [INFO][3862] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58" Mar 4 02:19:23.526626 containerd[1633]: time="2026-03-04T02:19:23.526282844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-qrf7j,Uid:be02ed94-fac8-40fb-bf12-2f0443c96f50,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.798 [INFO][3874] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.801 [INFO][3874] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" iface="eth0" netns="/var/run/netns/cni-76dba115-f577-6a07-92e3-9f8cc9b69d69" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.801 [INFO][3874] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" iface="eth0" netns="/var/run/netns/cni-76dba115-f577-6a07-92e3-9f8cc9b69d69" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.825 [INFO][3874] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" iface="eth0" netns="/var/run/netns/cni-76dba115-f577-6a07-92e3-9f8cc9b69d69" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.825 [INFO][3874] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:22.825 [INFO][3874] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.206 [INFO][3942] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" HandleID="k8s-pod-network.dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.206 [INFO][3942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.461 [INFO][3942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.498 [WARNING][3942] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" HandleID="k8s-pod-network.dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.498 [INFO][3942] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" HandleID="k8s-pod-network.dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.508 [INFO][3942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.562947 containerd[1633]: 2026-03-04 02:19:23.523 [INFO][3874] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:22.733 [ERROR][3893] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:22.835 [INFO][3893] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0 csi-node-driver- calico-system 0d0411e4-66e2-420e-a25a-3a6cba15a516 710 0 2026-03-04 02:18:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com csi-node-driver-76s4z eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] cali550fc22606e [] [] }} ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:22.835 [INFO][3893] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.132 [INFO][3954] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" HandleID="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Workload="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.159 [INFO][3954] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" HandleID="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Workload="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000115780), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"csi-node-driver-76s4z", "timestamp":"2026-03-04 02:19:23.132493457 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00025f600)} Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.159 [INFO][3954] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.288 [INFO][3954] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.289 [INFO][3954] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.308 [INFO][3954] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.370 [INFO][3954] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.388 [INFO][3954] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.396 [INFO][3954] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.404 [INFO][3954] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.404 [INFO][3954] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.408 [INFO][3954] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7 Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.417 [INFO][3954] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 
handle="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.433 [INFO][3954] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.196/26] block=192.168.10.192/26 handle="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.434 [INFO][3954] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.196/26] handle="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.434 [INFO][3954] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.580327 containerd[1633]: 2026-03-04 02:19:23.434 [INFO][3954] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.196/26] IPv6=[] ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" HandleID="k8s-pod-network.c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Workload="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.455 [INFO][3893] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d0411e4-66e2-420e-a25a-3a6cba15a516", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 56, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"csi-node-driver-76s4z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali550fc22606e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.456 [INFO][3893] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.196/32] ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.458 [INFO][3893] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali550fc22606e ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.492 [INFO][3893] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.496 [INFO][3893] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0d0411e4-66e2-420e-a25a-3a6cba15a516", ResourceVersion:"710", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7", Pod:"csi-node-driver-76s4z", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.10.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.csi-node-driver"}, InterfaceName:"cali550fc22606e", MAC:"92:48:4a:40:f2:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:23.584298 containerd[1633]: 2026-03-04 02:19:23.543 [INFO][3893] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7" Namespace="calico-system" Pod="csi-node-driver-76s4z" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-csi--node--driver--76s4z-eth0" Mar 4 02:19:23.588651 kubelet[2861]: E0304 02:19:23.588261 2861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.588651 kubelet[2861]: E0304 02:19:23.588540 2861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.597522 kubelet[2861]: E0304 02:19:23.589603 2861 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gqxzr" Mar 4 02:19:23.597522 kubelet[2861]: E0304 02:19:23.590012 2861 
kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" Mar 4 02:19:23.597522 kubelet[2861]: E0304 02:19:23.590558 2861 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" Mar 4 02:19:23.597674 kubelet[2861]: E0304 02:19:23.591197 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7d4f4c7f-qrf7j_calico-system(be02ed94-fac8-40fb-bf12-2f0443c96f50)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7d4f4c7f-qrf7j_calico-system(be02ed94-fac8-40fb-bf12-2f0443c96f50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" podUID="be02ed94-fac8-40fb-bf12-2f0443c96f50" Mar 4 02:19:23.597674 kubelet[2861]: E0304 02:19:23.591332 2861 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-gqxzr" Mar 4 02:19:23.597674 kubelet[2861]: E0304 02:19:23.591438 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-gqxzr_kube-system(b9c9516a-f907-4907-a5cc-ffce8e9f7515)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-gqxzr_kube-system(b9c9516a-f907-4907-a5cc-ffce8e9f7515)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67877f307cee629d0120ddf56031fde52f439f09a90ab9c8a1c95072b8b41df3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-gqxzr" podUID="b9c9516a-f907-4907-a5cc-ffce8e9f7515" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.804 [INFO][3867] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.805 [INFO][3867] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" iface="eth0" netns="/var/run/netns/cni-17a089b8-fe20-46aa-97f7-0647c9e75aea" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.807 [INFO][3867] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" iface="eth0" netns="/var/run/netns/cni-17a089b8-fe20-46aa-97f7-0647c9e75aea" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.807 [INFO][3867] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" iface="eth0" netns="/var/run/netns/cni-17a089b8-fe20-46aa-97f7-0647c9e75aea" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.809 [INFO][3867] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:22.809 [INFO][3867] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.208 [INFO][3937] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" HandleID="k8s-pod-network.083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.209 [INFO][3937] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.509 [INFO][3937] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.577 [WARNING][3937] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" HandleID="k8s-pod-network.083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.577 [INFO][3937] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" HandleID="k8s-pod-network.083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.583 [INFO][3937] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:23.605906 containerd[1633]: 2026-03-04 02:19:23.596 [INFO][3867] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf" Mar 4 02:19:23.615629 containerd[1633]: time="2026-03-04T02:19:23.615281631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9npx6,Uid:6c252b40-b769-41c3-a985-3ad359dfb9c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.616176 containerd[1633]: time="2026-03-04T02:19:23.615815190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:23.616176 containerd[1633]: time="2026-03-04T02:19:23.615932912Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:23.616176 containerd[1633]: time="2026-03-04T02:19:23.615955222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.616176 containerd[1633]: time="2026-03-04T02:19:23.616134043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.618166 kubelet[2861]: E0304 02:19:23.617611 2861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.618166 kubelet[2861]: E0304 02:19:23.617702 2861 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:23.618166 kubelet[2861]: E0304 02:19:23.617732 2861 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-9npx6" Mar 4 02:19:23.622865 kubelet[2861]: E0304 02:19:23.617847 2861 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-9npx6_calico-system(6c252b40-b769-41c3-a985-3ad359dfb9c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-9npx6_calico-system(6c252b40-b769-41c3-a985-3ad359dfb9c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-9npx6" podUID="6c252b40-b769-41c3-a985-3ad359dfb9c2" Mar 4 02:19:23.833722 containerd[1633]: time="2026-03-04T02:19:23.830343167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-nd4xh,Uid:95f1b570-a2f8-4407-94d9-6b1231857727,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.833993 kubelet[2861]: E0304 02:19:23.832828 2861 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 4 02:19:23.833993 kubelet[2861]: E0304 02:19:23.832924 2861 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" Mar 4 02:19:23.833993 kubelet[2861]: E0304 02:19:23.832953 2861 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" Mar 4 02:19:23.834218 kubelet[2861]: E0304 02:19:23.833034 2861 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b7d4f4c7f-nd4xh_calico-system(95f1b570-a2f8-4407-94d9-6b1231857727)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b7d4f4c7f-nd4xh_calico-system(95f1b570-a2f8-4407-94d9-6b1231857727)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" podUID="95f1b570-a2f8-4407-94d9-6b1231857727" Mar 4 02:19:23.883884 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:23.877126 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:23.877137 systemd-resolved[1518]: Flushed all caches. 
Mar 4 02:19:23.967849 containerd[1633]: time="2026-03-04T02:19:23.966535977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-v4hct,Uid:1f95e2c4-0e2b-45a3-999b-371f9b171489,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481\"" Mar 4 02:19:23.971788 containerd[1633]: time="2026-03-04T02:19:23.971022375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-64b664658b-2dz7l,Uid:9c3e591b-3214-4ea2-9f53-b02ad45f9933,Namespace:calico-system,Attempt:0,} returns sandbox id \"dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92\"" Mar 4 02:19:23.989078 containerd[1633]: time="2026-03-04T02:19:23.988163285Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Mar 4 02:19:23.994252 containerd[1633]: time="2026-03-04T02:19:23.990183468Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:23.994252 containerd[1633]: time="2026-03-04T02:19:23.990592525Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:23.994252 containerd[1633]: time="2026-03-04T02:19:23.990630892Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:23.994252 containerd[1633]: time="2026-03-04T02:19:23.993467135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:24.071616 containerd[1633]: time="2026-03-04T02:19:24.071429838Z" level=info msg="CreateContainer within sandbox \"1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 02:19:24.086923 containerd[1633]: time="2026-03-04T02:19:24.086666573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-qrf7j,Uid:be02ed94-fac8-40fb-bf12-2f0443c96f50,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:24.091785 containerd[1633]: time="2026-03-04T02:19:24.091512586Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9npx6,Uid:6c252b40-b769-41c3-a985-3ad359dfb9c2,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:24.120489 containerd[1633]: time="2026-03-04T02:19:24.119972857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-nd4xh,Uid:95f1b570-a2f8-4407-94d9-6b1231857727,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:24.129226 containerd[1633]: time="2026-03-04T02:19:24.121739400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqxzr,Uid:b9c9516a-f907-4907-a5cc-ffce8e9f7515,Namespace:kube-system,Attempt:0,}" Mar 4 02:19:24.202741 systemd-networkd[1264]: cali0bff7bddfbd: Gained IPv6LL Mar 4 02:19:24.342127 systemd[1]: run-netns-cni\x2d7d5003f8\x2d1229\x2ddfde\x2d352e\x2d8e12f8afc257.mount: Deactivated successfully. Mar 4 02:19:24.342892 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-985c27da47491fb41247db069b56e685631324196f4350fcfb24faf8075b6d58-shm.mount: Deactivated successfully. Mar 4 02:19:24.343358 systemd[1]: run-netns-cni\x2d17a089b8\x2dfe20\x2d46aa\x2d97f7\x2d0647c9e75aea.mount: Deactivated successfully. 
Mar 4 02:19:24.343967 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-083a5aad7c5f773134d211d09f5ce36b9555a27f3a76ee3677ddf23c338895cf-shm.mount: Deactivated successfully. Mar 4 02:19:24.344258 systemd[1]: run-netns-cni\x2d76dba115\x2df577\x2d6a07\x2d92e3\x2d9f8cc9b69d69.mount: Deactivated successfully. Mar 4 02:19:24.345120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dce7e1ec66ef66944214a9f5f9ebef4af2e0cf7cc43b1c3ae451520f7e4d1f83-shm.mount: Deactivated successfully. Mar 4 02:19:24.414498 containerd[1633]: time="2026-03-04T02:19:24.414310505Z" level=info msg="CreateContainer within sandbox \"1b2f24c745b314b80c2867f402400bbd1c767bc54b87ded7e1eb97658ac29481\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e2ccf3947232072c8b82820392fca92352fdbd1e41abfb623b8f1ae5969e3f3d\"" Mar 4 02:19:24.435421 containerd[1633]: time="2026-03-04T02:19:24.435271097Z" level=info msg="StartContainer for \"e2ccf3947232072c8b82820392fca92352fdbd1e41abfb623b8f1ae5969e3f3d\"" Mar 4 02:19:24.437644 containerd[1633]: time="2026-03-04T02:19:24.437527557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675b5b64c-gmzhg,Uid:209ee552-e07e-43c8-b4cb-0bb161f67e80,Namespace:calico-system,Attempt:0,} returns sandbox id \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\"" Mar 4 02:19:24.516095 systemd-networkd[1264]: calia058f725f92: Gained IPv6LL Mar 4 02:19:24.597783 containerd[1633]: time="2026-03-04T02:19:24.595906531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-76s4z,Uid:0d0411e4-66e2-420e-a25a-3a6cba15a516,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7\"" Mar 4 02:19:24.644160 systemd-networkd[1264]: cali550fc22606e: Gained IPv6LL Mar 4 02:19:24.772549 systemd-networkd[1264]: cali16a3c47a355: Gained IPv6LL Mar 4 02:19:25.068431 kubelet[2861]: I0304 02:19:25.068348 2861 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Mar 4 02:19:25.368834 kernel: calico-node[4183]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 4 02:19:25.687197 containerd[1633]: time="2026-03-04T02:19:25.686947418Z" level=info msg="StartContainer for \"e2ccf3947232072c8b82820392fca92352fdbd1e41abfb623b8f1ae5969e3f3d\" returns successfully" Mar 4 02:19:25.933898 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:25.939931 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:25.939986 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:26.508911 systemd-networkd[1264]: calid453d876c04: Link UP Mar 4 02:19:26.521233 systemd-networkd[1264]: calid453d876c04: Gained carrier Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:24.698 [ERROR][4248] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:24.928 [INFO][4248] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0 coredns-674b8bbfcf- kube-system b9c9516a-f907-4907-a5cc-ffce8e9f7515 900 0 2026-03-04 02:18:39 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com coredns-674b8bbfcf-gqxzr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid453d876c04 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-" Mar 4 02:19:26.713833 
containerd[1633]: 2026-03-04 02:19:24.929 [INFO][4248] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.049 [INFO][4326] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" HandleID="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.065 [INFO][4326] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" HandleID="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00035f920), Attrs:map[string]string{"namespace":"kube-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"coredns-674b8bbfcf-gqxzr", "timestamp":"2026-03-04 02:19:26.049048477 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002a89a0)} Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.065 [INFO][4326] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.065 [INFO][4326] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.066 [INFO][4326] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.134 [INFO][4326] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.178 [INFO][4326] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.215 [INFO][4326] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.279 [INFO][4326] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.307 [INFO][4326] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.308 [INFO][4326] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.327 [INFO][4326] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1 Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.355 [INFO][4326] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.391 [INFO][4326] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.197/26] block=192.168.10.192/26 handle="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.398 [INFO][4326] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.197/26] handle="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.398 [INFO][4326] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:26.713833 containerd[1633]: 2026-03-04 02:19:26.398 [INFO][4326] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.197/26] IPv6=[] ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" HandleID="k8s-pod-network.7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Workload="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.749098 containerd[1633]: 2026-03-04 02:19:26.452 [INFO][4248] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9c9516a-f907-4907-a5cc-ffce8e9f7515", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"coredns-674b8bbfcf-gqxzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid453d876c04", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:26.749098 containerd[1633]: 2026-03-04 02:19:26.469 [INFO][4248] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.197/32] ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.749098 containerd[1633]: 2026-03-04 02:19:26.470 [INFO][4248] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid453d876c04 ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.749098 containerd[1633]: 
2026-03-04 02:19:26.542 [INFO][4248] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.749098 containerd[1633]: 2026-03-04 02:19:26.561 [INFO][4248] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b9c9516a-f907-4907-a5cc-ffce8e9f7515", ResourceVersion:"900", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1", Pod:"coredns-674b8bbfcf-gqxzr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.10.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calid453d876c04", MAC:"4a:0e:f3:5d:42:d7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:26.749098 containerd[1633]: 2026-03-04 02:19:26.631 [INFO][4248] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1" Namespace="kube-system" Pod="coredns-674b8bbfcf-gqxzr" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-coredns--674b8bbfcf--gqxzr-eth0" Mar 4 02:19:26.926754 systemd-networkd[1264]: calia4e6c875de0: Link UP Mar 4 02:19:26.929376 systemd-networkd[1264]: calia4e6c875de0: Gained carrier Mar 4 02:19:26.992944 kubelet[2861]: I0304 02:19:26.946263 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-v4hct" podStartSLOduration=47.821150548 podStartE2EDuration="47.821150548s" podCreationTimestamp="2026-03-04 02:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:19:26.816920002 +0000 UTC m=+52.781274555" watchObservedRunningTime="2026-03-04 02:19:26.821150548 +0000 UTC m=+52.785505077" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:24.894 [ERROR][4237] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:25.100 [INFO][4237] 
cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0 calico-apiserver-6b7d4f4c7f- calico-system be02ed94-fac8-40fb-bf12-2f0443c96f50 897 0 2026-03-04 02:18:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7d4f4c7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com calico-apiserver-6b7d4f4c7f-qrf7j eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calia4e6c875de0 [] [] }} ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:25.101 [INFO][4237] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:25.975 [INFO][4355] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" HandleID="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.107 [INFO][4355] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" 
HandleID="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a7aa0), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"calico-apiserver-6b7d4f4c7f-qrf7j", "timestamp":"2026-03-04 02:19:25.975634036 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000358dc0)} Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.108 [INFO][4355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.403 [INFO][4355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.403 [INFO][4355] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.413 [INFO][4355] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.448 [INFO][4355] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.470 [INFO][4355] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.539 [INFO][4355] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.594 [INFO][4355] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.596 [INFO][4355] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.649 [INFO][4355] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.708 [INFO][4355] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.749 [INFO][4355] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.198/26] block=192.168.10.192/26 handle="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.749 [INFO][4355] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.198/26] handle="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.749 [INFO][4355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:27.137247 containerd[1633]: 2026-03-04 02:19:26.749 [INFO][4355] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.198/26] IPv6=[] ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" HandleID="k8s-pod-network.cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:26.809 [INFO][4237] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0", GenerateName:"calico-apiserver-6b7d4f4c7f-", Namespace:"calico-system", SelfLink:"", UID:"be02ed94-fac8-40fb-bf12-2f0443c96f50", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d4f4c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b7d4f4c7f-qrf7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia4e6c875de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:26.809 [INFO][4237] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.198/32] ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:26.809 [INFO][4237] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia4e6c875de0 ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:26.932 [INFO][4237] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" 
Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:26.937 [INFO][4237] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0", GenerateName:"calico-apiserver-6b7d4f4c7f-", Namespace:"calico-system", SelfLink:"", UID:"be02ed94-fac8-40fb-bf12-2f0443c96f50", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d4f4c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e", Pod:"calico-apiserver-6b7d4f4c7f-qrf7j", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calia4e6c875de0", 
MAC:"b6:32:f0:b8:e9:9a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.148623 containerd[1633]: 2026-03-04 02:19:27.011 [INFO][4237] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-qrf7j" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--qrf7j-eth0" Mar 4 02:19:27.212967 systemd-networkd[1264]: cali93de6b5128a: Link UP Mar 4 02:19:27.260049 systemd-networkd[1264]: cali93de6b5128a: Gained carrier Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:25.284 [INFO][4269] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0 calico-apiserver-6b7d4f4c7f- calico-system 95f1b570-a2f8-4407-94d9-6b1231857727 899 0 2026-03-04 02:18:55 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6b7d4f4c7f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com calico-apiserver-6b7d4f4c7f-nd4xh eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali93de6b5128a [] [] }} ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:25.284 [INFO][4269] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" 
WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:25.969 [INFO][4377] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" HandleID="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.127 [INFO][4377] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" HandleID="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000302350), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"calico-apiserver-6b7d4f4c7f-nd4xh", "timestamp":"2026-03-04 02:19:25.969306388 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000405a20)} Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.128 [INFO][4377] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.750 [INFO][4377] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.750 [INFO][4377] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.788 [INFO][4377] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.916 [INFO][4377] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.939 [INFO][4377] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.947 [INFO][4377] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.959 [INFO][4377] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.959 [INFO][4377] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:26.982 [INFO][4377] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065 Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:27.027 [INFO][4377] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:27.062 [INFO][4377] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.199/26] block=192.168.10.192/26 handle="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:27.062 [INFO][4377] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.199/26] handle="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:27.062 [INFO][4377] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:27.528699 containerd[1633]: 2026-03-04 02:19:27.062 [INFO][4377] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.199/26] IPv6=[] ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" HandleID="k8s-pod-network.57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Workload="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.097 [INFO][4269] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0", GenerateName:"calico-apiserver-6b7d4f4c7f-", Namespace:"calico-system", SelfLink:"", UID:"95f1b570-a2f8-4407-94d9-6b1231857727", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d4f4c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"calico-apiserver-6b7d4f4c7f-nd4xh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali93de6b5128a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.107 [INFO][4269] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.199/32] ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.108 [INFO][4269] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93de6b5128a ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.332 [INFO][4269] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" 
Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.350 [INFO][4269] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0", GenerateName:"calico-apiserver-6b7d4f4c7f-", Namespace:"calico-system", SelfLink:"", UID:"95f1b570-a2f8-4407-94d9-6b1231857727", ResourceVersion:"899", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6b7d4f4c7f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065", Pod:"calico-apiserver-6b7d4f4c7f-nd4xh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.10.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali93de6b5128a", 
MAC:"ee:ff:78:67:60:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.536010 containerd[1633]: 2026-03-04 02:19:27.418 [INFO][4269] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065" Namespace="calico-system" Pod="calico-apiserver-6b7d4f4c7f-nd4xh" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-calico--apiserver--6b7d4f4c7f--nd4xh-eth0" Mar 4 02:19:27.591841 systemd-networkd[1264]: calia5aa1020ea3: Link UP Mar 4 02:19:27.592194 systemd-networkd[1264]: calia5aa1020ea3: Gained carrier Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:25.376 [INFO][4292] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0 goldmane-5b85766d88- calico-system 6c252b40-b769-41c3-a985-3ad359dfb9c2 898 0 2026-03-04 02:18:55 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com goldmane-5b85766d88-9npx6 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calia5aa1020ea3 [] [] }} ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:25.376 [INFO][4292] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.810020 
containerd[1633]: 2026-03-04 02:19:26.102 [INFO][4382] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" HandleID="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:26.160 [INFO][4382] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" HandleID="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af350), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"goldmane-5b85766d88-9npx6", "timestamp":"2026-03-04 02:19:26.102217973 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00054af20)} Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:26.160 [INFO][4382] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.077 [INFO][4382] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.078 [INFO][4382] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.109 [INFO][4382] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.206 [INFO][4382] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.406 [INFO][4382] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.427 [INFO][4382] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.435 [INFO][4382] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.435 [INFO][4382] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.449 [INFO][4382] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385 Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.463 [INFO][4382] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.481 [INFO][4382] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.200/26] block=192.168.10.192/26 handle="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.481 [INFO][4382] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.200/26] handle="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.481 [INFO][4382] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:27.810020 containerd[1633]: 2026-03-04 02:19:27.481 [INFO][4382] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.200/26] IPv6=[] ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" HandleID="k8s-pod-network.75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Workload="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 02:19:27.577 [INFO][4292] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6c252b40-b769-41c3-a985-3ad359dfb9c2", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", 
"pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"goldmane-5b85766d88-9npx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia5aa1020ea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 02:19:27.583 [INFO][4292] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.200/32] ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 02:19:27.583 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia5aa1020ea3 ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 02:19:27.587 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 
02:19:27.651 [INFO][4292] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"6c252b40-b769-41c3-a985-3ad359dfb9c2", ResourceVersion:"898", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 18, 55, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385", Pod:"goldmane-5b85766d88-9npx6", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.10.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calia5aa1020ea3", MAC:"ce:93:ef:da:d4:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:27.815291 containerd[1633]: 2026-03-04 02:19:27.672 [INFO][4292] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385" Namespace="calico-system" Pod="goldmane-5b85766d88-9npx6" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-goldmane--5b85766d88--9npx6-eth0" Mar 4 02:19:27.997197 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:27.972438 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:27.972511 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:28.357132 systemd-networkd[1264]: calid453d876c04: Gained IPv6LL Mar 4 02:19:28.403830 containerd[1633]: time="2026-03-04T02:19:28.300630780Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:28.403830 containerd[1633]: time="2026-03-04T02:19:28.324396347Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:28.403830 containerd[1633]: time="2026-03-04T02:19:28.324424374Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.403830 containerd[1633]: time="2026-03-04T02:19:28.324657732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.457467 containerd[1633]: time="2026-03-04T02:19:28.419230201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:28.457467 containerd[1633]: time="2026-03-04T02:19:28.419346624Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:28.457467 containerd[1633]: time="2026-03-04T02:19:28.419371327Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.457467 containerd[1633]: time="2026-03-04T02:19:28.419619315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.464091 containerd[1633]: time="2026-03-04T02:19:28.450957825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:28.464091 containerd[1633]: time="2026-03-04T02:19:28.451101733Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:28.464091 containerd[1633]: time="2026-03-04T02:19:28.451128309Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.464091 containerd[1633]: time="2026-03-04T02:19:28.451339006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.514061 systemd-networkd[1264]: vxlan.calico: Link UP Mar 4 02:19:28.514072 systemd-networkd[1264]: vxlan.calico: Gained carrier Mar 4 02:19:28.556447 systemd-networkd[1264]: cali93de6b5128a: Gained IPv6LL Mar 4 02:19:28.812363 systemd-networkd[1264]: calia4e6c875de0: Gained IPv6LL Mar 4 02:19:28.920684 containerd[1633]: time="2026-03-04T02:19:28.614963457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:28.920684 containerd[1633]: time="2026-03-04T02:19:28.616060117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:28.920684 containerd[1633]: time="2026-03-04T02:19:28.616090646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:28.920684 containerd[1633]: time="2026-03-04T02:19:28.624145350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:29.574138 systemd-networkd[1264]: calia5aa1020ea3: Gained IPv6LL Mar 4 02:19:29.766692 containerd[1633]: time="2026-03-04T02:19:29.766403733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-9npx6,Uid:6c252b40-b769-41c3-a985-3ad359dfb9c2,Namespace:calico-system,Attempt:0,} returns sandbox id \"75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385\"" Mar 4 02:19:29.788209 containerd[1633]: time="2026-03-04T02:19:29.786484820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-nd4xh,Uid:95f1b570-a2f8-4407-94d9-6b1231857727,Namespace:calico-system,Attempt:0,} returns sandbox id \"57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065\"" Mar 4 02:19:29.791044 containerd[1633]: time="2026-03-04T02:19:29.791001550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-gqxzr,Uid:b9c9516a-f907-4907-a5cc-ffce8e9f7515,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1\"" Mar 4 02:19:29.792323 systemd[1]: run-containerd-runc-k8s.io-d013544a4a4f14357e597fd78362e1514a3f2039defcce1659f1dd18e6eb455f-runc.j9MdCj.mount: Deactivated successfully. 
Mar 4 02:19:29.799619 containerd[1633]: time="2026-03-04T02:19:29.799050456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b7d4f4c7f-qrf7j,Uid:be02ed94-fac8-40fb-bf12-2f0443c96f50,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e\"" Mar 4 02:19:29.924538 containerd[1633]: time="2026-03-04T02:19:29.924326497Z" level=info msg="CreateContainer within sandbox \"7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 4 02:19:30.025971 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:30.025372 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:30.025691 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:30.027932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1697840511.mount: Deactivated successfully. Mar 4 02:19:30.038983 containerd[1633]: time="2026-03-04T02:19:30.038857625Z" level=info msg="CreateContainer within sandbox \"7f8011c47a28881410d164057664fc93d27ff68495e46cda7b2fdb3eb37d49d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"dad014190bd6dda2be9ff4b4f64c94e96589a331ffa2f7e5caf0b9abef252abb\"" Mar 4 02:19:30.045069 containerd[1633]: time="2026-03-04T02:19:30.043670339Z" level=info msg="StartContainer for \"dad014190bd6dda2be9ff4b4f64c94e96589a331ffa2f7e5caf0b9abef252abb\"" Mar 4 02:19:30.210144 containerd[1633]: time="2026-03-04T02:19:30.209029085Z" level=info msg="StartContainer for \"dad014190bd6dda2be9ff4b4f64c94e96589a331ffa2f7e5caf0b9abef252abb\" returns successfully" Mar 4 02:19:30.214686 systemd-networkd[1264]: vxlan.calico: Gained IPv6LL Mar 4 02:19:30.684757 kubelet[2861]: I0304 02:19:30.683816 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-gqxzr" podStartSLOduration=51.683723327 podStartE2EDuration="51.683723327s" 
podCreationTimestamp="2026-03-04 02:18:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:19:30.651550079 +0000 UTC m=+56.615904603" watchObservedRunningTime="2026-03-04 02:19:30.683723327 +0000 UTC m=+56.648077870" Mar 4 02:19:32.344751 containerd[1633]: time="2026-03-04T02:19:32.344621927Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Mar 4 02:19:32.382259 containerd[1633]: time="2026-03-04T02:19:32.382108077Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 8.390403429s" Mar 4 02:19:32.395142 containerd[1633]: time="2026-03-04T02:19:32.394997086Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Mar 4 02:19:32.479698 containerd[1633]: time="2026-03-04T02:19:32.479195963Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:32.483431 containerd[1633]: time="2026-03-04T02:19:32.482571443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 4 02:19:32.532170 containerd[1633]: time="2026-03-04T02:19:32.532113307Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:32.539752 containerd[1633]: time="2026-03-04T02:19:32.539416621Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:32.646832 containerd[1633]: time="2026-03-04T02:19:32.645499958Z" level=info msg="CreateContainer within sandbox \"dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 4 02:19:32.701833 containerd[1633]: time="2026-03-04T02:19:32.695873476Z" level=info msg="CreateContainer within sandbox \"dbe458fe647bb11e3c5f644d2e8eacabb0ee0c4cee12dbad100ac66439a84b92\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"a743fe2f978935098d1d14bcd34b50b6f63c38404619fb1c4a1bde0228f81a35\"" Mar 4 02:19:32.701833 containerd[1633]: time="2026-03-04T02:19:32.698163919Z" level=info msg="StartContainer for \"a743fe2f978935098d1d14bcd34b50b6f63c38404619fb1c4a1bde0228f81a35\"" Mar 4 02:19:33.010760 containerd[1633]: time="2026-03-04T02:19:33.010047357Z" level=info msg="StartContainer for \"a743fe2f978935098d1d14bcd34b50b6f63c38404619fb1c4a1bde0228f81a35\" returns successfully" Mar 4 02:19:33.779303 kubelet[2861]: I0304 02:19:33.773960 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-64b664658b-2dz7l" podStartSLOduration=29.318391528 podStartE2EDuration="37.744176431s" podCreationTimestamp="2026-03-04 02:18:56 +0000 UTC" firstStartedPulling="2026-03-04 02:19:23.987138698 +0000 UTC m=+49.951493221" lastFinishedPulling="2026-03-04 02:19:32.412923595 +0000 UTC m=+58.377278124" observedRunningTime="2026-03-04 02:19:33.726281015 +0000 UTC m=+59.690635549" watchObservedRunningTime="2026-03-04 02:19:33.744176431 +0000 UTC m=+59.708530948" Mar 4 02:19:33.867388 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:33.866864 systemd-resolved[1518]: Under memory pressure, flushing caches. 
Mar 4 02:19:33.866898 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:35.095004 containerd[1633]: time="2026-03-04T02:19:35.094927805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:35.098118 containerd[1633]: time="2026-03-04T02:19:35.097698016Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 4 02:19:35.100829 containerd[1633]: time="2026-03-04T02:19:35.099106608Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:35.102544 containerd[1633]: time="2026-03-04T02:19:35.102512985Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:35.106483 containerd[1633]: time="2026-03-04T02:19:35.106438038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 2.623821256s" Mar 4 02:19:35.106629 containerd[1633]: time="2026-03-04T02:19:35.106602880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 4 02:19:35.108919 containerd[1633]: time="2026-03-04T02:19:35.108890698Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 4 02:19:35.115970 containerd[1633]: time="2026-03-04T02:19:35.115925277Z" level=info msg="CreateContainer within sandbox 
\"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 02:19:35.160955 containerd[1633]: time="2026-03-04T02:19:35.160879616Z" level=info msg="CreateContainer within sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\"" Mar 4 02:19:35.169879 containerd[1633]: time="2026-03-04T02:19:35.168189336Z" level=info msg="StartContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\"" Mar 4 02:19:35.365470 containerd[1633]: time="2026-03-04T02:19:35.365022323Z" level=info msg="StartContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" returns successfully" Mar 4 02:19:35.912043 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:35.910898 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:35.910926 systemd-resolved[1518]: Flushed all caches. 
Mar 4 02:19:37.100663 containerd[1633]: time="2026-03-04T02:19:37.099045842Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:37.102506 containerd[1633]: time="2026-03-04T02:19:37.102460144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Mar 4 02:19:37.104839 containerd[1633]: time="2026-03-04T02:19:37.103905462Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:37.111745 containerd[1633]: time="2026-03-04T02:19:37.111686001Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 2.002644714s" Mar 4 02:19:37.112114 containerd[1633]: time="2026-03-04T02:19:37.112075873Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Mar 4 02:19:37.112315 containerd[1633]: time="2026-03-04T02:19:37.112024792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:37.130045 containerd[1633]: time="2026-03-04T02:19:37.129993407Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 4 02:19:37.139918 containerd[1633]: time="2026-03-04T02:19:37.139718145Z" level=info msg="CreateContainer within sandbox \"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 4 02:19:37.188826 containerd[1633]: time="2026-03-04T02:19:37.186328501Z" level=info msg="CreateContainer within sandbox \"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"0a46ded7d6c767b617bf940a6cd739d903e5b94f8a8c3bd1f1efa96f7fd5214d\"" Mar 4 02:19:37.189288 containerd[1633]: time="2026-03-04T02:19:37.189251984Z" level=info msg="StartContainer for \"0a46ded7d6c767b617bf940a6cd739d903e5b94f8a8c3bd1f1efa96f7fd5214d\"" Mar 4 02:19:37.367855 containerd[1633]: time="2026-03-04T02:19:37.367658208Z" level=info msg="StartContainer for \"0a46ded7d6c767b617bf940a6cd739d903e5b94f8a8c3bd1f1efa96f7fd5214d\" returns successfully" Mar 4 02:19:40.283697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028703712.mount: Deactivated successfully. Mar 4 02:19:41.045824 containerd[1633]: time="2026-03-04T02:19:41.044316003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:41.047553 containerd[1633]: time="2026-03-04T02:19:41.047471467Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 4 02:19:41.049588 containerd[1633]: time="2026-03-04T02:19:41.049555895Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:41.057296 containerd[1633]: time="2026-03-04T02:19:41.057246225Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:41.059742 containerd[1633]: time="2026-03-04T02:19:41.058246757Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.927678703s" Mar 4 02:19:41.059955 containerd[1633]: time="2026-03-04T02:19:41.059926640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 4 02:19:41.068873 containerd[1633]: time="2026-03-04T02:19:41.068822321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 02:19:41.098036 containerd[1633]: time="2026-03-04T02:19:41.097884582Z" level=info msg="CreateContainer within sandbox \"75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 4 02:19:41.121264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3753644855.mount: Deactivated successfully. 
Mar 4 02:19:41.129807 containerd[1633]: time="2026-03-04T02:19:41.129734780Z" level=info msg="CreateContainer within sandbox \"75e036d6db4866840b7c20beda871d64ce64e2e8d618257298b0bfaac9c2e385\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"93f28aa2f84aaed261b600e1fe0d42b8b3b92219a0eedd41d942dd19796fc054\"" Mar 4 02:19:41.131176 containerd[1633]: time="2026-03-04T02:19:41.131141527Z" level=info msg="StartContainer for \"93f28aa2f84aaed261b600e1fe0d42b8b3b92219a0eedd41d942dd19796fc054\"" Mar 4 02:19:41.273636 containerd[1633]: time="2026-03-04T02:19:41.273563720Z" level=info msg="StartContainer for \"93f28aa2f84aaed261b600e1fe0d42b8b3b92219a0eedd41d942dd19796fc054\" returns successfully" Mar 4 02:19:42.875832 systemd[1]: run-containerd-runc-k8s.io-93f28aa2f84aaed261b600e1fe0d42b8b3b92219a0eedd41d942dd19796fc054-runc.MFu2cs.mount: Deactivated successfully. Mar 4 02:19:44.901611 containerd[1633]: time="2026-03-04T02:19:44.900777843Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:44.905165 containerd[1633]: time="2026-03-04T02:19:44.902433627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 4 02:19:44.914544 containerd[1633]: time="2026-03-04T02:19:44.914484521Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:44.917383 containerd[1633]: time="2026-03-04T02:19:44.917318787Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:44.918820 containerd[1633]: time="2026-03-04T02:19:44.918649581Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" 
with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.845599357s" Mar 4 02:19:44.918820 containerd[1633]: time="2026-03-04T02:19:44.918699723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 02:19:44.921409 containerd[1633]: time="2026-03-04T02:19:44.921104039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 4 02:19:44.927436 containerd[1633]: time="2026-03-04T02:19:44.927385104Z" level=info msg="CreateContainer within sandbox \"57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 02:19:44.971420 containerd[1633]: time="2026-03-04T02:19:44.971266763Z" level=info msg="CreateContainer within sandbox \"57f0da536f79317e268c16df08f9d189a9ea7dbfd2dec21f2aec8b9b8f5f7065\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"334ef38be015a8c7f29dfdc38b7db04ce7e62a0c3874a027111574ec1061a895\"" Mar 4 02:19:44.975425 containerd[1633]: time="2026-03-04T02:19:44.972236598Z" level=info msg="StartContainer for \"334ef38be015a8c7f29dfdc38b7db04ce7e62a0c3874a027111574ec1061a895\"" Mar 4 02:19:45.126764 containerd[1633]: time="2026-03-04T02:19:45.126703357Z" level=info msg="StartContainer for \"334ef38be015a8c7f29dfdc38b7db04ce7e62a0c3874a027111574ec1061a895\" returns successfully" Mar 4 02:19:45.302528 containerd[1633]: time="2026-03-04T02:19:45.302446591Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:45.303575 containerd[1633]: time="2026-03-04T02:19:45.303499926Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 4 02:19:45.309265 containerd[1633]: time="2026-03-04T02:19:45.309158386Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 388.00191ms" Mar 4 02:19:45.309265 containerd[1633]: time="2026-03-04T02:19:45.309215512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 4 02:19:45.312920 containerd[1633]: time="2026-03-04T02:19:45.312163768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 4 02:19:45.315716 containerd[1633]: time="2026-03-04T02:19:45.315648170Z" level=info msg="CreateContainer within sandbox \"cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 4 02:19:45.338610 containerd[1633]: time="2026-03-04T02:19:45.338526547Z" level=info msg="CreateContainer within sandbox \"cf5370d916f5a0cc56b0a419c4fc75765e8d1df119006bcae4cbd710adae8f8e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3e010fcfc71cc4ddd4d06aac5f11bf7ca63676961dcde359512d6ba770bb1120\"" Mar 4 02:19:45.342940 containerd[1633]: time="2026-03-04T02:19:45.340815430Z" level=info msg="StartContainer for \"3e010fcfc71cc4ddd4d06aac5f11bf7ca63676961dcde359512d6ba770bb1120\"" Mar 4 02:19:45.505827 containerd[1633]: time="2026-03-04T02:19:45.505190514Z" level=info msg="StartContainer for \"3e010fcfc71cc4ddd4d06aac5f11bf7ca63676961dcde359512d6ba770bb1120\" returns successfully" Mar 4 02:19:45.902025 kubelet[2861]: 
I0304 02:19:45.865916 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-9npx6" podStartSLOduration=39.643980117 podStartE2EDuration="50.865831929s" podCreationTimestamp="2026-03-04 02:18:55 +0000 UTC" firstStartedPulling="2026-03-04 02:19:29.844206695 +0000 UTC m=+55.808561215" lastFinishedPulling="2026-03-04 02:19:41.066058499 +0000 UTC m=+67.030413027" observedRunningTime="2026-03-04 02:19:41.834924165 +0000 UTC m=+67.799278706" watchObservedRunningTime="2026-03-04 02:19:45.865831929 +0000 UTC m=+71.830186466" Mar 4 02:19:45.902025 kubelet[2861]: I0304 02:19:45.901711 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b7d4f4c7f-nd4xh" podStartSLOduration=35.825249397 podStartE2EDuration="50.901660277s" podCreationTimestamp="2026-03-04 02:18:55 +0000 UTC" firstStartedPulling="2026-03-04 02:19:29.84432806 +0000 UTC m=+55.808682576" lastFinishedPulling="2026-03-04 02:19:44.920738932 +0000 UTC m=+70.885093456" observedRunningTime="2026-03-04 02:19:45.862278225 +0000 UTC m=+71.826632755" watchObservedRunningTime="2026-03-04 02:19:45.901660277 +0000 UTC m=+71.866014862" Mar 4 02:19:45.930271 kubelet[2861]: I0304 02:19:45.929510 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6b7d4f4c7f-qrf7j" podStartSLOduration=35.467990523 podStartE2EDuration="50.929485963s" podCreationTimestamp="2026-03-04 02:18:55 +0000 UTC" firstStartedPulling="2026-03-04 02:19:29.849036267 +0000 UTC m=+55.813390784" lastFinishedPulling="2026-03-04 02:19:45.310531703 +0000 UTC m=+71.274886224" observedRunningTime="2026-03-04 02:19:45.903453738 +0000 UTC m=+71.867808272" watchObservedRunningTime="2026-03-04 02:19:45.929485963 +0000 UTC m=+71.893840493" Mar 4 02:19:46.863749 kubelet[2861]: I0304 02:19:46.863601 2861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 02:19:47.848755 
kubelet[2861]: I0304 02:19:47.847992 2861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 02:19:47.885658 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:47.882287 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:47.882361 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:48.246644 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount620923559.mount: Deactivated successfully. Mar 4 02:19:48.303300 containerd[1633]: time="2026-03-04T02:19:48.303237890Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:48.305828 containerd[1633]: time="2026-03-04T02:19:48.305435851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 4 02:19:48.306638 containerd[1633]: time="2026-03-04T02:19:48.306606936Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:48.328513 containerd[1633]: time="2026-03-04T02:19:48.328457673Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:48.330769 containerd[1633]: time="2026-03-04T02:19:48.330728080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 3.018510631s" Mar 4 02:19:48.330904 containerd[1633]: 
time="2026-03-04T02:19:48.330773186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 4 02:19:48.334250 containerd[1633]: time="2026-03-04T02:19:48.333714027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Mar 4 02:19:48.345495 containerd[1633]: time="2026-03-04T02:19:48.345204691Z" level=info msg="CreateContainer within sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 02:19:48.376197 containerd[1633]: time="2026-03-04T02:19:48.373871849Z" level=info msg="CreateContainer within sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\"" Mar 4 02:19:48.389815 containerd[1633]: time="2026-03-04T02:19:48.387766261Z" level=info msg="StartContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\"" Mar 4 02:19:48.663823 containerd[1633]: time="2026-03-04T02:19:48.660850302Z" level=info msg="StartContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" returns successfully" Mar 4 02:19:48.993516 kubelet[2861]: I0304 02:19:48.993039 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-675b5b64c-gmzhg" podStartSLOduration=22.376546359 podStartE2EDuration="45.992974258s" podCreationTimestamp="2026-03-04 02:19:03 +0000 UTC" firstStartedPulling="2026-03-04 02:19:24.716395232 +0000 UTC m=+50.680749756" lastFinishedPulling="2026-03-04 02:19:48.332823123 +0000 UTC m=+74.297177655" observedRunningTime="2026-03-04 02:19:48.992370638 +0000 UTC m=+74.956725172" watchObservedRunningTime="2026-03-04 02:19:48.992974258 +0000 UTC m=+74.957328787" Mar 4 
02:19:49.176491 containerd[1633]: time="2026-03-04T02:19:49.175596583Z" level=info msg="StopContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" with timeout 30 (s)" Mar 4 02:19:49.179591 containerd[1633]: time="2026-03-04T02:19:49.179454175Z" level=info msg="StopContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" with timeout 30 (s)" Mar 4 02:19:49.190349 containerd[1633]: time="2026-03-04T02:19:49.190200344Z" level=info msg="Stop container \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" with signal terminated" Mar 4 02:19:49.191370 containerd[1633]: time="2026-03-04T02:19:49.191240571Z" level=info msg="Stop container \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" with signal terminated" Mar 4 02:19:49.407940 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e-rootfs.mount: Deactivated successfully. Mar 4 02:19:49.438508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206-rootfs.mount: Deactivated successfully. 
Mar 4 02:19:49.459924 containerd[1633]: time="2026-03-04T02:19:49.436074764Z" level=info msg="shim disconnected" id=7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206 namespace=k8s.io Mar 4 02:19:49.488261 containerd[1633]: time="2026-03-04T02:19:49.488082747Z" level=info msg="shim disconnected" id=dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e namespace=k8s.io Mar 4 02:19:49.490773 containerd[1633]: time="2026-03-04T02:19:49.490596297Z" level=warning msg="cleaning up after shim disconnected" id=dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e namespace=k8s.io Mar 4 02:19:49.490773 containerd[1633]: time="2026-03-04T02:19:49.490648935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:19:49.509818 containerd[1633]: time="2026-03-04T02:19:49.507896983Z" level=warning msg="cleaning up after shim disconnected" id=7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206 namespace=k8s.io Mar 4 02:19:49.509818 containerd[1633]: time="2026-03-04T02:19:49.507962363Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:19:49.622504 containerd[1633]: time="2026-03-04T02:19:49.622393329Z" level=info msg="StopContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" returns successfully" Mar 4 02:19:49.623349 containerd[1633]: time="2026-03-04T02:19:49.623295108Z" level=info msg="StopContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" returns successfully" Mar 4 02:19:49.644676 containerd[1633]: time="2026-03-04T02:19:49.644597704Z" level=info msg="StopPodSandbox for \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\"" Mar 4 02:19:49.671071 containerd[1633]: time="2026-03-04T02:19:49.670838728Z" level=info msg="Container to stop \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 4 02:19:49.671071 containerd[1633]: 
time="2026-03-04T02:19:49.670964633Z" level=info msg="Container to stop \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 4 02:19:49.683932 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295-shm.mount: Deactivated successfully. Mar 4 02:19:49.756764 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295-rootfs.mount: Deactivated successfully. Mar 4 02:19:49.758526 containerd[1633]: time="2026-03-04T02:19:49.756937913Z" level=info msg="shim disconnected" id=636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295 namespace=k8s.io Mar 4 02:19:49.758526 containerd[1633]: time="2026-03-04T02:19:49.758059146Z" level=warning msg="cleaning up after shim disconnected" id=636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295 namespace=k8s.io Mar 4 02:19:49.759982 containerd[1633]: time="2026-03-04T02:19:49.758980845Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 4 02:19:49.904853 kubelet[2861]: I0304 02:19:49.904650 2861 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Mar 4 02:19:49.930725 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:49.929942 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:49.930010 systemd-resolved[1518]: Flushed all caches. 
Mar 4 02:19:50.173514 systemd-networkd[1264]: calia058f725f92: Link DOWN Mar 4 02:19:50.173527 systemd-networkd[1264]: calia058f725f92: Lost carrier Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.150 [INFO][5339] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.156 [INFO][5339] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" iface="eth0" netns="/var/run/netns/cni-e8d6bdac-8974-9691-85d9-cfd2f69d6816" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.157 [INFO][5339] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" iface="eth0" netns="/var/run/netns/cni-e8d6bdac-8974-9691-85d9-cfd2f69d6816" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.196 [INFO][5339] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" after=40.171439ms iface="eth0" netns="/var/run/netns/cni-e8d6bdac-8974-9691-85d9-cfd2f69d6816" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.196 [INFO][5339] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.197 [INFO][5339] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.575 [INFO][5346] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.578 [INFO][5346] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.578 [INFO][5346] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.680 [INFO][5346] ipam/ipam_plugin.go 516: Released address using handleID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.680 [INFO][5346] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0" Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.683 [INFO][5346] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:50.690454 containerd[1633]: 2026-03-04 02:19:50.687 [INFO][5339] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Mar 4 02:19:50.695582 containerd[1633]: time="2026-03-04T02:19:50.692991756Z" level=info msg="TearDown network for sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" successfully" Mar 4 02:19:50.695582 containerd[1633]: time="2026-03-04T02:19:50.693054215Z" level=info msg="StopPodSandbox for \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" returns successfully" Mar 4 02:19:50.697531 systemd[1]: run-netns-cni\x2de8d6bdac\x2d8974\x2d9691\x2d85d9\x2dcfd2f69d6816.mount: Deactivated successfully. 
Mar 4 02:19:50.910525 kubelet[2861]: I0304 02:19:50.910097 2861 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-nginx-config\") pod \"209ee552-e07e-43c8-b4cb-0bb161f67e80\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " Mar 4 02:19:50.910525 kubelet[2861]: I0304 02:19:50.910235 2861 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-backend-key-pair\") pod \"209ee552-e07e-43c8-b4cb-0bb161f67e80\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " Mar 4 02:19:50.910525 kubelet[2861]: I0304 02:19:50.910276 2861 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-ca-bundle\") pod \"209ee552-e07e-43c8-b4cb-0bb161f67e80\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " Mar 4 02:19:50.910525 kubelet[2861]: I0304 02:19:50.910306 2861 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jfk7\" (UniqueName: \"kubernetes.io/projected/209ee552-e07e-43c8-b4cb-0bb161f67e80-kube-api-access-6jfk7\") pod \"209ee552-e07e-43c8-b4cb-0bb161f67e80\" (UID: \"209ee552-e07e-43c8-b4cb-0bb161f67e80\") " Mar 4 02:19:50.973224 systemd[1]: var-lib-kubelet-pods-209ee552\x2de07e\x2d43c8\x2db4cb\x2d0bb161f67e80-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Mar 4 02:19:50.979045 systemd[1]: var-lib-kubelet-pods-209ee552\x2de07e\x2d43c8\x2db4cb\x2d0bb161f67e80-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jfk7.mount: Deactivated successfully. 
Mar 4 02:19:50.995829 kubelet[2861]: I0304 02:19:50.986106 2861 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "209ee552-e07e-43c8-b4cb-0bb161f67e80" (UID: "209ee552-e07e-43c8-b4cb-0bb161f67e80"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 02:19:50.996113 kubelet[2861]: I0304 02:19:50.985785 2861 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "209ee552-e07e-43c8-b4cb-0bb161f67e80" (UID: "209ee552-e07e-43c8-b4cb-0bb161f67e80"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 4 02:19:50.996113 kubelet[2861]: I0304 02:19:50.994057 2861 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/209ee552-e07e-43c8-b4cb-0bb161f67e80-kube-api-access-6jfk7" (OuterVolumeSpecName: "kube-api-access-6jfk7") pod "209ee552-e07e-43c8-b4cb-0bb161f67e80" (UID: "209ee552-e07e-43c8-b4cb-0bb161f67e80"). InnerVolumeSpecName "kube-api-access-6jfk7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 4 02:19:50.996260 kubelet[2861]: I0304 02:19:50.996236 2861 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "209ee552-e07e-43c8-b4cb-0bb161f67e80" (UID: "209ee552-e07e-43c8-b4cb-0bb161f67e80"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 4 02:19:51.012473 kubelet[2861]: I0304 02:19:51.012369 2861 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jfk7\" (UniqueName: \"kubernetes.io/projected/209ee552-e07e-43c8-b4cb-0bb161f67e80-kube-api-access-6jfk7\") on node \"srv-mtsxv.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:19:51.012473 kubelet[2861]: I0304 02:19:51.012443 2861 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-nginx-config\") on node \"srv-mtsxv.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:19:51.013057 kubelet[2861]: I0304 02:19:51.012494 2861 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-backend-key-pair\") on node \"srv-mtsxv.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:19:51.013057 kubelet[2861]: I0304 02:19:51.012513 2861 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/209ee552-e07e-43c8-b4cb-0bb161f67e80-whisker-ca-bundle\") on node \"srv-mtsxv.gb1.brightbox.com\" DevicePath \"\"" Mar 4 02:19:51.021761 containerd[1633]: time="2026-03-04T02:19:51.021485712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:51.024219 containerd[1633]: time="2026-03-04T02:19:51.024112967Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Mar 4 02:19:51.032983 containerd[1633]: time="2026-03-04T02:19:51.032368551Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:51.040190 containerd[1633]: 
time="2026-03-04T02:19:51.039891931Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 4 02:19:51.041309 containerd[1633]: time="2026-03-04T02:19:51.041078560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 2.707290188s" Mar 4 02:19:51.041309 containerd[1633]: time="2026-03-04T02:19:51.041140322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Mar 4 02:19:51.052610 containerd[1633]: time="2026-03-04T02:19:51.052518069Z" level=info msg="CreateContainer within sandbox \"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 4 02:19:51.081170 containerd[1633]: time="2026-03-04T02:19:51.080956265Z" level=info msg="CreateContainer within sandbox \"c4f1b1566b6c0dd1e4adad5627b58faf079a708e02c3d44592a9966eccaa8af7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7fe46a934255ee3ba045447dd3b603bd4ed4f7b8de7997983d9757a9a73e909d\"" Mar 4 02:19:51.083829 containerd[1633]: time="2026-03-04T02:19:51.082205518Z" level=info msg="StartContainer for \"7fe46a934255ee3ba045447dd3b603bd4ed4f7b8de7997983d9757a9a73e909d\"" Mar 4 02:19:51.290012 containerd[1633]: time="2026-03-04T02:19:51.289952194Z" level=info msg="StartContainer for \"7fe46a934255ee3ba045447dd3b603bd4ed4f7b8de7997983d9757a9a73e909d\" 
returns successfully" Mar 4 02:19:51.651371 kubelet[2861]: I0304 02:19:51.651103 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/9e8da5df-23af-405f-ba88-cab4be6d3515-whisker-backend-key-pair\") pod \"whisker-7fd6566f6d-jvtlb\" (UID: \"9e8da5df-23af-405f-ba88-cab4be6d3515\") " pod="calico-system/whisker-7fd6566f6d-jvtlb" Mar 4 02:19:51.652194 kubelet[2861]: I0304 02:19:51.652147 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xgc5\" (UniqueName: \"kubernetes.io/projected/9e8da5df-23af-405f-ba88-cab4be6d3515-kube-api-access-8xgc5\") pod \"whisker-7fd6566f6d-jvtlb\" (UID: \"9e8da5df-23af-405f-ba88-cab4be6d3515\") " pod="calico-system/whisker-7fd6566f6d-jvtlb" Mar 4 02:19:51.652514 kubelet[2861]: I0304 02:19:51.652405 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/9e8da5df-23af-405f-ba88-cab4be6d3515-nginx-config\") pod \"whisker-7fd6566f6d-jvtlb\" (UID: \"9e8da5df-23af-405f-ba88-cab4be6d3515\") " pod="calico-system/whisker-7fd6566f6d-jvtlb" Mar 4 02:19:51.652830 kubelet[2861]: I0304 02:19:51.652686 2861 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9e8da5df-23af-405f-ba88-cab4be6d3515-whisker-ca-bundle\") pod \"whisker-7fd6566f6d-jvtlb\" (UID: \"9e8da5df-23af-405f-ba88-cab4be6d3515\") " pod="calico-system/whisker-7fd6566f6d-jvtlb" Mar 4 02:19:51.730296 kubelet[2861]: I0304 02:19:51.730234 2861 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 4 02:19:51.732808 kubelet[2861]: I0304 02:19:51.731485 2861 csi_plugin.go:119] kubernetes.io/csi: 
Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 4 02:19:51.889563 containerd[1633]: time="2026-03-04T02:19:51.889472640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fd6566f6d-jvtlb,Uid:9e8da5df-23af-405f-ba88-cab4be6d3515,Namespace:calico-system,Attempt:0,}" Mar 4 02:19:51.954944 kubelet[2861]: I0304 02:19:51.953081 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-76s4z" podStartSLOduration=29.629044242 podStartE2EDuration="55.953045086s" podCreationTimestamp="2026-03-04 02:18:56 +0000 UTC" firstStartedPulling="2026-03-04 02:19:24.718533071 +0000 UTC m=+50.682887595" lastFinishedPulling="2026-03-04 02:19:51.042533911 +0000 UTC m=+77.006888439" observedRunningTime="2026-03-04 02:19:51.942007943 +0000 UTC m=+77.906362475" watchObservedRunningTime="2026-03-04 02:19:51.953045086 +0000 UTC m=+77.917399616" Mar 4 02:19:51.982982 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:51.975738 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:51.975748 systemd-resolved[1518]: Flushed all caches. 
Mar 4 02:19:52.239572 systemd-networkd[1264]: cali70be4b4cf25: Link UP Mar 4 02:19:52.242454 systemd-networkd[1264]: cali70be4b4cf25: Gained carrier Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.102 [INFO][5417] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0 whisker-7fd6566f6d- calico-system 9e8da5df-23af-405f-ba88-cab4be6d3515 1112 0 2026-03-04 02:19:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7fd6566f6d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s srv-mtsxv.gb1.brightbox.com whisker-7fd6566f6d-jvtlb eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali70be4b4cf25 [] [] }} ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.103 [INFO][5417] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.167 [INFO][5430] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" HandleID="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.177 [INFO][5430] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" HandleID="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000122710), Attrs:map[string]string{"namespace":"calico-system", "node":"srv-mtsxv.gb1.brightbox.com", "pod":"whisker-7fd6566f6d-jvtlb", "timestamp":"2026-03-04 02:19:52.167025199 +0000 UTC"}, Hostname:"srv-mtsxv.gb1.brightbox.com", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004eab00)} Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.177 [INFO][5430] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.177 [INFO][5430] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.177 [INFO][5430] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'srv-mtsxv.gb1.brightbox.com' Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.181 [INFO][5430] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.187 [INFO][5430] ipam/ipam.go 409: Looking up existing affinities for host host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.195 [INFO][5430] ipam/ipam.go 526: Trying affinity for 192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.198 [INFO][5430] ipam/ipam.go 160: Attempting to load block cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.204 [INFO][5430] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.10.192/26 host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.204 [INFO][5430] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.10.192/26 handle="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.208 [INFO][5430] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389 Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.215 [INFO][5430] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.10.192/26 handle="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.226 [INFO][5430] 
ipam/ipam.go 1288: Successfully claimed IPs: [192.168.10.201/26] block=192.168.10.192/26 handle="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.226 [INFO][5430] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.10.201/26] handle="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" host="srv-mtsxv.gb1.brightbox.com" Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.226 [INFO][5430] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 4 02:19:52.318936 containerd[1633]: 2026-03-04 02:19:52.226 [INFO][5430] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.10.201/26] IPv6=[] ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" HandleID="k8s-pod-network.7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.233 [INFO][5417] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0", GenerateName:"whisker-7fd6566f6d-", Namespace:"calico-system", SelfLink:"", UID:"9e8da5df-23af-405f-ba88-cab4be6d3515", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fd6566f6d", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"", Pod:"whisker-7fd6566f6d-jvtlb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70be4b4cf25", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.234 [INFO][5417] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.10.201/32] ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.234 [INFO][5417] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali70be4b4cf25 ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.242 [INFO][5417] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.243 [INFO][5417] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0", GenerateName:"whisker-7fd6566f6d-", Namespace:"calico-system", SelfLink:"", UID:"9e8da5df-23af-405f-ba88-cab4be6d3515", ResourceVersion:"1112", Generation:0, CreationTimestamp:time.Date(2026, time.March, 4, 2, 19, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7fd6566f6d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"srv-mtsxv.gb1.brightbox.com", ContainerID:"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389", Pod:"whisker-7fd6566f6d-jvtlb", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.10.201/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali70be4b4cf25", MAC:"a6:96:cc:8a:4e:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 4 02:19:52.326551 containerd[1633]: 2026-03-04 02:19:52.293 [INFO][5417] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389" Namespace="calico-system" Pod="whisker-7fd6566f6d-jvtlb" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--7fd6566f6d--jvtlb-eth0" Mar 4 02:19:52.335136 kubelet[2861]: I0304 02:19:52.334809 2861 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="209ee552-e07e-43c8-b4cb-0bb161f67e80" path="/var/lib/kubelet/pods/209ee552-e07e-43c8-b4cb-0bb161f67e80/volumes" Mar 4 02:19:52.515255 containerd[1633]: time="2026-03-04T02:19:52.509543016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 4 02:19:52.515255 containerd[1633]: time="2026-03-04T02:19:52.509684083Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 4 02:19:52.515255 containerd[1633]: time="2026-03-04T02:19:52.509733658Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:52.520567 containerd[1633]: time="2026-03-04T02:19:52.518604485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 4 02:19:52.738119 containerd[1633]: time="2026-03-04T02:19:52.738016603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7fd6566f6d-jvtlb,Uid:9e8da5df-23af-405f-ba88-cab4be6d3515,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389\"" Mar 4 02:19:52.774246 containerd[1633]: time="2026-03-04T02:19:52.774176588Z" level=info msg="CreateContainer within sandbox \"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 4 02:19:52.830514 containerd[1633]: time="2026-03-04T02:19:52.830320918Z" level=info msg="CreateContainer within sandbox \"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"464c0a88b1b5da4491357446aba7400d078ebfc88dbd9686a87192acb3c19a46\"" Mar 4 02:19:52.832821 containerd[1633]: time="2026-03-04T02:19:52.831591207Z" level=info msg="StartContainer for \"464c0a88b1b5da4491357446aba7400d078ebfc88dbd9686a87192acb3c19a46\"" Mar 4 02:19:52.952360 containerd[1633]: time="2026-03-04T02:19:52.952274755Z" level=info msg="StartContainer for \"464c0a88b1b5da4491357446aba7400d078ebfc88dbd9686a87192acb3c19a46\" returns successfully" Mar 4 02:19:52.960692 containerd[1633]: time="2026-03-04T02:19:52.960606278Z" level=info msg="CreateContainer within sandbox \"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 4 02:19:53.022199 containerd[1633]: time="2026-03-04T02:19:53.022129327Z" level=info msg="CreateContainer within sandbox \"7ea069f1679b821193d7aeecaaf93a3a82f675cf68861980f11f7287371d6389\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"b80321da35c8bb8f6ddb2e21d82cd5417d072d132a51734cc7bbbe27c44ae2f8\"" Mar 4 02:19:53.025686 
containerd[1633]: time="2026-03-04T02:19:53.025571921Z" level=info msg="StartContainer for \"b80321da35c8bb8f6ddb2e21d82cd5417d072d132a51734cc7bbbe27c44ae2f8\"" Mar 4 02:19:53.149604 containerd[1633]: time="2026-03-04T02:19:53.149544062Z" level=info msg="StartContainer for \"b80321da35c8bb8f6ddb2e21d82cd5417d072d132a51734cc7bbbe27c44ae2f8\" returns successfully" Mar 4 02:19:53.508361 systemd-networkd[1264]: cali70be4b4cf25: Gained IPv6LL Mar 4 02:19:53.703550 systemd[1]: run-containerd-runc-k8s.io-464c0a88b1b5da4491357446aba7400d078ebfc88dbd9686a87192acb3c19a46-runc.jEiPlk.mount: Deactivated successfully. Mar 4 02:19:53.943395 kubelet[2861]: I0304 02:19:53.943187 2861 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7fd6566f6d-jvtlb" podStartSLOduration=2.943159832 podStartE2EDuration="2.943159832s" podCreationTimestamp="2026-03-04 02:19:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-04 02:19:53.940335141 +0000 UTC m=+79.904689676" watchObservedRunningTime="2026-03-04 02:19:53.943159832 +0000 UTC m=+79.907514373" Mar 4 02:19:54.021664 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:54.025360 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:54.021688 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:55.693629 systemd[1]: Started sshd@7-10.230.66.70:22-20.161.92.111:41804.service - OpenSSH per-connection server daemon (20.161.92.111:41804). Mar 4 02:19:56.386172 sshd[5618]: Accepted publickey for core from 20.161.92.111 port 41804 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:19:56.391435 sshd[5618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:19:56.430981 systemd-logind[1611]: New session 10 of user core. Mar 4 02:19:56.437885 systemd[1]: Started session-10.scope - Session 10 of User core. 
Mar 4 02:19:57.525911 sshd[5618]: pam_unix(sshd:session): session closed for user core Mar 4 02:19:57.539509 systemd[1]: sshd@7-10.230.66.70:22-20.161.92.111:41804.service: Deactivated successfully. Mar 4 02:19:57.544109 systemd-logind[1611]: Session 10 logged out. Waiting for processes to exit. Mar 4 02:19:57.544783 systemd[1]: session-10.scope: Deactivated successfully. Mar 4 02:19:57.547746 systemd-logind[1611]: Removed session 10. Mar 4 02:19:57.860263 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:57.860274 systemd-resolved[1518]: Flushed all caches. Mar 4 02:19:57.862821 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:59.912055 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:19:59.909429 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:19:59.909454 systemd-resolved[1518]: Flushed all caches. Mar 4 02:20:02.625464 systemd[1]: Started sshd@8-10.230.66.70:22-20.161.92.111:39680.service - OpenSSH per-connection server daemon (20.161.92.111:39680). Mar 4 02:20:03.316843 sshd[5662]: Accepted publickey for core from 20.161.92.111 port 39680 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:03.320046 sshd[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:03.333142 systemd-logind[1611]: New session 11 of user core. Mar 4 02:20:03.339620 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 4 02:20:04.276489 sshd[5662]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:04.281736 systemd-logind[1611]: Session 11 logged out. Waiting for processes to exit. Mar 4 02:20:04.282525 systemd[1]: sshd@8-10.230.66.70:22-20.161.92.111:39680.service: Deactivated successfully. Mar 4 02:20:04.289031 systemd[1]: session-11.scope: Deactivated successfully. Mar 4 02:20:04.290372 systemd-logind[1611]: Removed session 11. 
Mar 4 02:20:07.387435 systemd[1]: Started sshd@9-10.230.66.70:22-181.115.147.5:49354.service - OpenSSH per-connection server daemon (181.115.147.5:49354). Mar 4 02:20:08.596126 sshd[5703]: Received disconnect from 181.115.147.5 port 49354:11: Bye Bye [preauth] Mar 4 02:20:08.596126 sshd[5703]: Disconnected from authenticating user root 181.115.147.5 port 49354 [preauth] Mar 4 02:20:08.598748 systemd[1]: sshd@9-10.230.66.70:22-181.115.147.5:49354.service: Deactivated successfully. Mar 4 02:20:09.373116 systemd[1]: Started sshd@10-10.230.66.70:22-20.161.92.111:39686.service - OpenSSH per-connection server daemon (20.161.92.111:39686). Mar 4 02:20:10.012645 sshd[5708]: Accepted publickey for core from 20.161.92.111 port 39686 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:10.016530 sshd[5708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:10.027229 systemd-logind[1611]: New session 12 of user core. Mar 4 02:20:10.032328 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 4 02:20:10.599480 sshd[5708]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:10.605486 systemd-logind[1611]: Session 12 logged out. Waiting for processes to exit. Mar 4 02:20:10.606469 systemd[1]: sshd@10-10.230.66.70:22-20.161.92.111:39686.service: Deactivated successfully. Mar 4 02:20:10.610514 systemd[1]: session-12.scope: Deactivated successfully. Mar 4 02:20:10.611818 systemd-logind[1611]: Removed session 12. Mar 4 02:20:11.389182 kubelet[2861]: I0304 02:20:11.384327 2861 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 4 02:20:15.700903 systemd[1]: Started sshd@11-10.230.66.70:22-20.161.92.111:44090.service - OpenSSH per-connection server daemon (20.161.92.111:44090). 
Mar 4 02:20:16.324250 sshd[5769]: Accepted publickey for core from 20.161.92.111 port 44090 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:16.327244 sshd[5769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:16.336566 systemd-logind[1611]: New session 13 of user core. Mar 4 02:20:16.342352 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 4 02:20:16.449212 systemd[1]: Started sshd@12-10.230.66.70:22-101.47.140.127:41700.service - OpenSSH per-connection server daemon (101.47.140.127:41700). Mar 4 02:20:17.219042 sshd[5769]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:17.224242 systemd-logind[1611]: Session 13 logged out. Waiting for processes to exit. Mar 4 02:20:17.225106 systemd[1]: sshd@11-10.230.66.70:22-20.161.92.111:44090.service: Deactivated successfully. Mar 4 02:20:17.230250 systemd[1]: session-13.scope: Deactivated successfully. Mar 4 02:20:17.233147 systemd-logind[1611]: Removed session 13. Mar 4 02:20:17.834869 systemd-journald[1184]: Under memory pressure, flushing caches. Mar 4 02:20:17.833648 systemd-resolved[1518]: Under memory pressure, flushing caches. Mar 4 02:20:17.833675 systemd-resolved[1518]: Flushed all caches. Mar 4 02:20:22.330864 systemd[1]: Started sshd@13-10.230.66.70:22-20.161.92.111:57552.service - OpenSSH per-connection server daemon (20.161.92.111:57552). Mar 4 02:20:22.956736 sshd[5805]: Accepted publickey for core from 20.161.92.111 port 57552 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:22.961955 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:22.972493 systemd-logind[1611]: New session 14 of user core. Mar 4 02:20:22.978294 systemd[1]: Started session-14.scope - Session 14 of User core. 
Mar 4 02:20:23.496041 sshd[5773]: Received disconnect from 101.47.140.127 port 41700:11: Bye Bye [preauth] Mar 4 02:20:23.496041 sshd[5773]: Disconnected from authenticating user root 101.47.140.127 port 41700 [preauth] Mar 4 02:20:23.498018 systemd[1]: sshd@12-10.230.66.70:22-101.47.140.127:41700.service: Deactivated successfully. Mar 4 02:20:23.548207 sshd[5805]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:23.555059 systemd[1]: sshd@13-10.230.66.70:22-20.161.92.111:57552.service: Deactivated successfully. Mar 4 02:20:23.559993 systemd-logind[1611]: Session 14 logged out. Waiting for processes to exit. Mar 4 02:20:23.561180 systemd[1]: session-14.scope: Deactivated successfully. Mar 4 02:20:23.563385 systemd-logind[1611]: Removed session 14. Mar 4 02:20:23.650200 systemd[1]: Started sshd@14-10.230.66.70:22-20.161.92.111:57562.service - OpenSSH per-connection server daemon (20.161.92.111:57562). Mar 4 02:20:24.238690 sshd[5823]: Accepted publickey for core from 20.161.92.111 port 57562 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:24.242102 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:24.250893 systemd-logind[1611]: New session 15 of user core. Mar 4 02:20:24.255513 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 4 02:20:24.863205 sshd[5823]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:24.868888 systemd-logind[1611]: Session 15 logged out. Waiting for processes to exit. Mar 4 02:20:24.869469 systemd[1]: sshd@14-10.230.66.70:22-20.161.92.111:57562.service: Deactivated successfully. Mar 4 02:20:24.876025 systemd[1]: session-15.scope: Deactivated successfully. Mar 4 02:20:24.879755 systemd-logind[1611]: Removed session 15. Mar 4 02:20:24.968222 systemd[1]: Started sshd@15-10.230.66.70:22-20.161.92.111:57572.service - OpenSSH per-connection server daemon (20.161.92.111:57572). 
Mar 4 02:20:25.563332 sshd[5835]: Accepted publickey for core from 20.161.92.111 port 57572 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:25.568338 sshd[5835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:25.576256 systemd-logind[1611]: New session 16 of user core. Mar 4 02:20:25.581889 systemd[1]: Started session-16.scope - Session 16 of User core. Mar 4 02:20:26.136333 sshd[5835]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:26.144596 systemd[1]: sshd@15-10.230.66.70:22-20.161.92.111:57572.service: Deactivated successfully. Mar 4 02:20:26.151617 systemd[1]: session-16.scope: Deactivated successfully. Mar 4 02:20:26.152082 systemd-logind[1611]: Session 16 logged out. Waiting for processes to exit. Mar 4 02:20:26.154477 systemd-logind[1611]: Removed session 16. Mar 4 02:20:31.235311 systemd[1]: Started sshd@16-10.230.66.70:22-20.161.92.111:41190.service - OpenSSH per-connection server daemon (20.161.92.111:41190). Mar 4 02:20:31.867286 sshd[5891]: Accepted publickey for core from 20.161.92.111 port 41190 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI Mar 4 02:20:31.874278 sshd[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 4 02:20:31.884493 systemd-logind[1611]: New session 17 of user core. Mar 4 02:20:31.890212 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 4 02:20:32.588359 sshd[5891]: pam_unix(sshd:session): session closed for user core Mar 4 02:20:32.602628 systemd[1]: sshd@16-10.230.66.70:22-20.161.92.111:41190.service: Deactivated successfully. Mar 4 02:20:32.611775 systemd[1]: session-17.scope: Deactivated successfully. Mar 4 02:20:32.612116 systemd-logind[1611]: Session 17 logged out. Waiting for processes to exit. Mar 4 02:20:32.615342 systemd-logind[1611]: Removed session 17. 
Mar 4 02:20:34.661471 kubelet[2861]: I0304 02:20:34.661356 2861 scope.go:117] "RemoveContainer" containerID="7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206"
Mar 4 02:20:34.872342 containerd[1633]: time="2026-03-04T02:20:34.850831225Z" level=info msg="RemoveContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\""
Mar 4 02:20:34.963273 containerd[1633]: time="2026-03-04T02:20:34.962906263Z" level=info msg="RemoveContainer for \"7480df1c8385f445a4cad8d8c3676b2b8d7d3bff6f3d9d6b0f67f304d9470206\" returns successfully"
Mar 4 02:20:34.973711 kubelet[2861]: I0304 02:20:34.973427 2861 scope.go:117] "RemoveContainer" containerID="dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e"
Mar 4 02:20:34.976173 containerd[1633]: time="2026-03-04T02:20:34.976094450Z" level=info msg="RemoveContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\""
Mar 4 02:20:34.980845 containerd[1633]: time="2026-03-04T02:20:34.980750700Z" level=info msg="RemoveContainer for \"dd06aa5e78742eadeba051748da96c8e829ba05e79145d7df8681afeddb4a80e\" returns successfully"
Mar 4 02:20:34.984779 containerd[1633]: time="2026-03-04T02:20:34.984604640Z" level=info msg="StopPodSandbox for \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\""
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.322 [WARNING][5941] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.325 [INFO][5941] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.325 [INFO][5941] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" iface="eth0" netns=""
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.325 [INFO][5941] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.325 [INFO][5941] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.547 [INFO][5948] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.551 [INFO][5948] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.552 [INFO][5948] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.574 [WARNING][5948] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.574 [INFO][5948] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.582 [INFO][5948] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 02:20:35.588484 containerd[1633]: 2026-03-04 02:20:35.585 [INFO][5941] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.595724 containerd[1633]: time="2026-03-04T02:20:35.595638528Z" level=info msg="TearDown network for sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" successfully"
Mar 4 02:20:35.595724 containerd[1633]: time="2026-03-04T02:20:35.595707616Z" level=info msg="StopPodSandbox for \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" returns successfully"
Mar 4 02:20:35.608695 containerd[1633]: time="2026-03-04T02:20:35.608632042Z" level=info msg="RemovePodSandbox for \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\""
Mar 4 02:20:35.619389 containerd[1633]: time="2026-03-04T02:20:35.619346136Z" level=info msg="Forcibly stopping sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\""
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.725 [WARNING][5962] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" WorkloadEndpoint="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.726 [INFO][5962] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.726 [INFO][5962] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" iface="eth0" netns=""
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.726 [INFO][5962] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.726 [INFO][5962] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.767 [INFO][5969] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.768 [INFO][5969] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.768 [INFO][5969] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.785 [WARNING][5969] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.785 [INFO][5969] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" HandleID="k8s-pod-network.636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295" Workload="srv--mtsxv.gb1.brightbox.com-k8s-whisker--675b5b64c--gmzhg-eth0"
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.791 [INFO][5969] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Mar 4 02:20:35.800107 containerd[1633]: 2026-03-04 02:20:35.796 [INFO][5962] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295"
Mar 4 02:20:35.803439 containerd[1633]: time="2026-03-04T02:20:35.800545268Z" level=info msg="TearDown network for sandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" successfully"
Mar 4 02:20:35.859951 containerd[1633]: time="2026-03-04T02:20:35.859770942Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Mar 4 02:20:35.860270 containerd[1633]: time="2026-03-04T02:20:35.860232388Z" level=info msg="RemovePodSandbox \"636c03d832ff087382775c42ddcee8afd5f0af64a05eb85b121d741d3ff06295\" returns successfully"
Mar 4 02:20:37.699523 systemd[1]: Started sshd@17-10.230.66.70:22-20.161.92.111:41202.service - OpenSSH per-connection server daemon (20.161.92.111:41202).
Mar 4 02:20:38.370599 sshd[5976]: Accepted publickey for core from 20.161.92.111 port 41202 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:38.374347 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:38.384569 systemd-logind[1611]: New session 18 of user core.
Mar 4 02:20:38.392675 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 4 02:20:39.396866 sshd[5976]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:39.409652 systemd[1]: sshd@17-10.230.66.70:22-20.161.92.111:41202.service: Deactivated successfully.
Mar 4 02:20:39.416927 systemd-logind[1611]: Session 18 logged out. Waiting for processes to exit.
Mar 4 02:20:39.417840 systemd[1]: session-18.scope: Deactivated successfully.
Mar 4 02:20:39.421663 systemd-logind[1611]: Removed session 18.
Mar 4 02:20:39.492107 systemd[1]: Started sshd@18-10.230.66.70:22-20.161.92.111:41206.service - OpenSSH per-connection server daemon (20.161.92.111:41206).
Mar 4 02:20:39.844072 systemd-resolved[1518]: Under memory pressure, flushing caches.
Mar 4 02:20:39.852221 systemd-journald[1184]: Under memory pressure, flushing caches.
Mar 4 02:20:39.844096 systemd-resolved[1518]: Flushed all caches.
Mar 4 02:20:40.077855 sshd[5989]: Accepted publickey for core from 20.161.92.111 port 41206 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:40.079512 sshd[5989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:40.087179 systemd-logind[1611]: New session 19 of user core.
Mar 4 02:20:40.092437 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 4 02:20:40.911722 sshd[5989]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:40.920854 systemd[1]: sshd@18-10.230.66.70:22-20.161.92.111:41206.service: Deactivated successfully.
Mar 4 02:20:40.928276 systemd-logind[1611]: Session 19 logged out. Waiting for processes to exit.
Mar 4 02:20:40.928712 systemd[1]: session-19.scope: Deactivated successfully.
Mar 4 02:20:40.932117 systemd-logind[1611]: Removed session 19.
Mar 4 02:20:41.013280 systemd[1]: Started sshd@19-10.230.66.70:22-20.161.92.111:35124.service - OpenSSH per-connection server daemon (20.161.92.111:35124).
Mar 4 02:20:41.621113 sshd[6006]: Accepted publickey for core from 20.161.92.111 port 35124 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:41.624462 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:41.633031 systemd-logind[1611]: New session 20 of user core.
Mar 4 02:20:41.638209 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 4 02:20:43.051856 sshd[6006]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:43.075424 systemd[1]: sshd@19-10.230.66.70:22-20.161.92.111:35124.service: Deactivated successfully.
Mar 4 02:20:43.087750 systemd-logind[1611]: Session 20 logged out. Waiting for processes to exit.
Mar 4 02:20:43.089603 systemd[1]: session-20.scope: Deactivated successfully.
Mar 4 02:20:43.094374 systemd-logind[1611]: Removed session 20.
Mar 4 02:20:43.152762 systemd[1]: Started sshd@20-10.230.66.70:22-20.161.92.111:35132.service - OpenSSH per-connection server daemon (20.161.92.111:35132).
Mar 4 02:20:43.759055 sshd[6035]: Accepted publickey for core from 20.161.92.111 port 35132 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:43.761345 sshd[6035]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:43.768327 systemd-logind[1611]: New session 21 of user core.
Mar 4 02:20:43.773213 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 4 02:20:45.148462 sshd[6035]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:45.161001 systemd[1]: sshd@20-10.230.66.70:22-20.161.92.111:35132.service: Deactivated successfully.
Mar 4 02:20:45.171161 systemd-logind[1611]: Session 21 logged out. Waiting for processes to exit.
Mar 4 02:20:45.171833 systemd[1]: session-21.scope: Deactivated successfully.
Mar 4 02:20:45.177237 systemd-logind[1611]: Removed session 21.
Mar 4 02:20:45.244327 systemd[1]: Started sshd@21-10.230.66.70:22-20.161.92.111:35146.service - OpenSSH per-connection server daemon (20.161.92.111:35146).
Mar 4 02:20:45.860104 systemd-resolved[1518]: Under memory pressure, flushing caches.
Mar 4 02:20:45.868511 systemd-journald[1184]: Under memory pressure, flushing caches.
Mar 4 02:20:45.860125 systemd-resolved[1518]: Flushed all caches.
Mar 4 02:20:45.878892 sshd[6069]: Accepted publickey for core from 20.161.92.111 port 35146 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:45.882212 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:45.893403 systemd-logind[1611]: New session 22 of user core.
Mar 4 02:20:45.900482 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 4 02:20:46.456658 sshd[6069]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:46.461760 systemd-logind[1611]: Session 22 logged out. Waiting for processes to exit.
Mar 4 02:20:46.464532 systemd[1]: sshd@21-10.230.66.70:22-20.161.92.111:35146.service: Deactivated successfully.
Mar 4 02:20:46.468680 systemd[1]: session-22.scope: Deactivated successfully.
Mar 4 02:20:46.470904 systemd-logind[1611]: Removed session 22.
Mar 4 02:20:51.570078 systemd[1]: Started sshd@22-10.230.66.70:22-20.161.92.111:57272.service - OpenSSH per-connection server daemon (20.161.92.111:57272).
Mar 4 02:20:52.203291 sshd[6097]: Accepted publickey for core from 20.161.92.111 port 57272 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:52.207160 sshd[6097]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:52.216410 systemd-logind[1611]: New session 23 of user core.
Mar 4 02:20:52.221208 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 4 02:20:52.879438 sshd[6097]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:52.889923 systemd[1]: sshd@22-10.230.66.70:22-20.161.92.111:57272.service: Deactivated successfully.
Mar 4 02:20:52.897574 systemd-logind[1611]: Session 23 logged out. Waiting for processes to exit.
Mar 4 02:20:52.898032 systemd[1]: session-23.scope: Deactivated successfully.
Mar 4 02:20:52.901216 systemd-logind[1611]: Removed session 23.
Mar 4 02:20:57.971235 systemd[1]: Started sshd@23-10.230.66.70:22-20.161.92.111:57286.service - OpenSSH per-connection server daemon (20.161.92.111:57286).
Mar 4 02:20:58.596552 sshd[6133]: Accepted publickey for core from 20.161.92.111 port 57286 ssh2: RSA SHA256:phL7137i5y6DHtmwXYw8sU0DtZKGvJBo2Tpr6jEeFOI
Mar 4 02:20:58.600659 sshd[6133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 4 02:20:58.610397 systemd-logind[1611]: New session 24 of user core.
Mar 4 02:20:58.617289 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 4 02:20:59.405084 sshd[6133]: pam_unix(sshd:session): session closed for user core
Mar 4 02:20:59.411620 systemd[1]: sshd@23-10.230.66.70:22-20.161.92.111:57286.service: Deactivated successfully.
Mar 4 02:20:59.417461 systemd-logind[1611]: Session 24 logged out. Waiting for processes to exit.
Mar 4 02:20:59.418639 systemd[1]: session-24.scope: Deactivated successfully.
Mar 4 02:20:59.420909 systemd-logind[1611]: Removed session 24.
Mar 4 02:20:59.882074 systemd-journald[1184]: Under memory pressure, flushing caches.
Mar 4 02:20:59.880320 systemd-resolved[1518]: Under memory pressure, flushing caches.
Mar 4 02:20:59.880348 systemd-resolved[1518]: Flushed all caches.
Mar 4 02:21:01.924363 systemd-resolved[1518]: Under memory pressure, flushing caches.
Mar 4 02:21:01.924382 systemd-resolved[1518]: Flushed all caches.
Mar 4 02:21:01.926828 systemd-journald[1184]: Under memory pressure, flushing caches.