Apr 13 20:23:05.505291 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 20:23:05.505348 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:23:05.505370 kernel: BIOS-provided physical RAM map: Apr 13 20:23:05.505400 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Apr 13 20:23:05.505417 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Apr 13 20:23:05.505433 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Apr 13 20:23:05.505448 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Apr 13 20:23:05.505467 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Apr 13 20:23:05.505481 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Apr 13 20:23:05.505500 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Apr 13 20:23:05.505518 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Apr 13 20:23:05.505534 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Apr 13 20:23:05.508599 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Apr 13 20:23:05.508645 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Apr 13 20:23:05.508683 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Apr 13 20:23:05.508706 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Apr 13 20:23:05.508727 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Apr 13 20:23:05.508747 kernel: NX (Execute Disable) protection: active Apr 13 20:23:05.508766 kernel: APIC: Static calls initialized Apr 13 20:23:05.508786 kernel: efi: EFI v2.7 by EDK II Apr 13 20:23:05.508806 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Apr 13 20:23:05.508826 kernel: SMBIOS 2.4 present. Apr 13 20:23:05.508846 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 Apr 13 20:23:05.508870 kernel: Hypervisor detected: KVM Apr 13 20:23:05.508897 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 20:23:05.508918 kernel: kvm-clock: using sched offset of 13829996700 cycles Apr 13 20:23:05.508943 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 20:23:05.508967 kernel: tsc: Detected 2299.998 MHz processor Apr 13 20:23:05.508988 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 20:23:05.509012 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 20:23:05.509034 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Apr 13 20:23:05.509056 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Apr 13 20:23:05.509079 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 20:23:05.509107 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Apr 13 20:23:05.509128 kernel: Using GB pages for direct mapping Apr 13 20:23:05.509147 kernel: Secure boot disabled Apr 13 20:23:05.509171 kernel: ACPI: Early table checksum verification disabled Apr 13 20:23:05.509193 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Apr 13 20:23:05.509215 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Apr 13 20:23:05.509240 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Apr 13 20:23:05.509275 kernel: 
ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 13 20:23:05.509300 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 13 20:23:05.509321 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Apr 13 20:23:05.509345 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 13 20:23:05.509371 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 13 20:23:05.509404 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 13 20:23:05.509426 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 13 20:23:05.509454 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 13 20:23:05.509477 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 13 20:23:05.509499 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 13 20:23:05.509520 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 13 20:23:05.509545 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 13 20:23:05.509823 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 13 20:23:05.509847 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 13 20:23:05.509870 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 13 20:23:05.509892 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 13 20:23:05.509921 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 13 20:23:05.509943 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 13 20:23:05.509965 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 13 20:23:05.509989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 13 20:23:05.510013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Apr 13 
20:23:05.510037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 13 20:23:05.510063 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 13 20:23:05.510086 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 13 20:23:05.510110 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Apr 13 20:23:05.510137 kernel: Zone ranges: Apr 13 20:23:05.510162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 20:23:05.510185 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 13 20:23:05.510209 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:23:05.510232 kernel: Movable zone start for each node Apr 13 20:23:05.510257 kernel: Early memory node ranges Apr 13 20:23:05.510283 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 13 20:23:05.510307 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 13 20:23:05.510333 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 13 20:23:05.510364 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 13 20:23:05.510393 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:23:05.510415 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 13 20:23:05.510441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 20:23:05.510462 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 13 20:23:05.510481 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 13 20:23:05.510503 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 13 20:23:05.510527 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 13 20:23:05.510579 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 13 20:23:05.510614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 20:23:05.510637 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Apr 13 20:23:05.510656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 20:23:05.510678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 20:23:05.510702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 20:23:05.510725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 20:23:05.510747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 20:23:05.510773 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 13 20:23:05.510796 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 20:23:05.510825 kernel: Booting paravirtualized kernel on KVM Apr 13 20:23:05.510850 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 20:23:05.510876 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 13 20:23:05.510902 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 13 20:23:05.510922 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 13 20:23:05.510946 kernel: pcpu-alloc: [0] 0 1 Apr 13 20:23:05.511210 kernel: kvm-guest: PV spinlocks enabled Apr 13 20:23:05.511233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 20:23:05.511256 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:23:05.511285 kernel: random: crng init done Apr 13 20:23:05.511303 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 13 20:23:05.511323 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Apr 
13 20:23:05.511346 kernel: Fallback order for Node 0: 0 Apr 13 20:23:05.511364 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 13 20:23:05.511457 kernel: Policy zone: Normal Apr 13 20:23:05.511476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 20:23:05.511495 kernel: software IO TLB: area num 2. Apr 13 20:23:05.511514 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347148K reserved, 0K cma-reserved) Apr 13 20:23:05.511571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 20:23:05.511592 kernel: Kernel/User page tables isolation: enabled Apr 13 20:23:05.511611 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 20:23:05.511629 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 20:23:05.511648 kernel: Dynamic Preempt: voluntary Apr 13 20:23:05.511666 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 20:23:05.511687 kernel: rcu: RCU event tracing is enabled. Apr 13 20:23:05.511707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 20:23:05.511764 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 20:23:05.511784 kernel: Rude variant of Tasks RCU enabled. Apr 13 20:23:05.511805 kernel: Tracing variant of Tasks RCU enabled. Apr 13 20:23:05.511829 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 20:23:05.511849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 20:23:05.511868 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 13 20:23:05.511889 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 20:23:05.511910 kernel: Console: colour dummy device 80x25 Apr 13 20:23:05.511934 kernel: printk: console [ttyS0] enabled Apr 13 20:23:05.511954 kernel: ACPI: Core revision 20230628 Apr 13 20:23:05.511974 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 20:23:05.511994 kernel: x2apic enabled Apr 13 20:23:05.512015 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 20:23:05.512034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 13 20:23:05.512054 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:23:05.512076 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 13 20:23:05.512096 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 13 20:23:05.512121 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 13 20:23:05.512140 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 20:23:05.512161 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 13 20:23:05.512181 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 13 20:23:05.512202 kernel: Spectre V2 : Mitigation: IBRS Apr 13 20:23:05.512222 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 20:23:05.512244 kernel: RETBleed: Mitigation: IBRS Apr 13 20:23:05.512264 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 13 20:23:05.512284 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 13 20:23:05.512309 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 13 20:23:05.512329 kernel: MDS: Mitigation: Clear CPU buffers Apr 13 20:23:05.512594 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 20:23:05.512615 kernel: active return thunk: its_return_thunk Apr 13 20:23:05.512636 
kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 20:23:05.512656 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 20:23:05.512676 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 20:23:05.512696 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 20:23:05.512722 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 20:23:05.512748 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 13 20:23:05.512769 kernel: Freeing SMP alternatives memory: 32K Apr 13 20:23:05.512789 kernel: pid_max: default: 32768 minimum: 301 Apr 13 20:23:05.512810 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 20:23:05.512830 kernel: landlock: Up and running. Apr 13 20:23:05.512850 kernel: SELinux: Initializing. Apr 13 20:23:05.512870 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.512890 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.512911 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 13 20:23:05.512935 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512956 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512976 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512996 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 13 20:23:05.513015 kernel: signal: max sigframe size: 1776 Apr 13 20:23:05.513037 kernel: rcu: Hierarchical SRCU implementation. Apr 13 20:23:05.513057 kernel: rcu: Max phase no-delay instances is 400. 
Apr 13 20:23:05.513078 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 20:23:05.513095 kernel: smp: Bringing up secondary CPUs ... Apr 13 20:23:05.513119 kernel: smpboot: x86: Booting SMP configuration: Apr 13 20:23:05.513137 kernel: .... node #0, CPUs: #1 Apr 13 20:23:05.513158 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 13 20:23:05.513179 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 13 20:23:05.513200 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 20:23:05.513220 kernel: smpboot: Max logical packages: 1 Apr 13 20:23:05.513240 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 13 20:23:05.513260 kernel: devtmpfs: initialized Apr 13 20:23:05.513284 kernel: x86/mm: Memory block size: 128MB Apr 13 20:23:05.513305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 13 20:23:05.513326 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 20:23:05.513347 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 20:23:05.513367 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 20:23:05.513394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 20:23:05.513414 kernel: audit: initializing netlink subsys (disabled) Apr 13 20:23:05.513434 kernel: audit: type=2000 audit(1776111782.964:1): state=initialized audit_enabled=0 res=1 Apr 13 20:23:05.513454 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 20:23:05.513478 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 20:23:05.513498 kernel: cpuidle: using governor menu Apr 13 20:23:05.513518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 
20:23:05.513538 kernel: dca service started, version 1.12.1 Apr 13 20:23:05.513573 kernel: PCI: Using configuration type 1 for base access Apr 13 20:23:05.513594 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 20:23:05.513615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 20:23:05.513636 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 20:23:05.513656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 20:23:05.513681 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 20:23:05.513701 kernel: ACPI: Added _OSI(Module Device) Apr 13 20:23:05.513722 kernel: ACPI: Added _OSI(Processor Device) Apr 13 20:23:05.513912 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 20:23:05.513932 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 13 20:23:05.513952 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 20:23:05.513972 kernel: ACPI: Interpreter enabled Apr 13 20:23:05.513993 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 20:23:05.514014 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 20:23:05.514039 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 20:23:05.514059 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 13 20:23:05.514079 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 13 20:23:05.514099 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 20:23:05.514425 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 13 20:23:05.514680 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 13 20:23:05.514902 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 13 20:23:05.514934 kernel: PCI host bridge to bus 0000:00 Apr 13 
20:23:05.515139 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 13 20:23:05.515339 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 20:23:05.515565 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 20:23:05.515781 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 13 20:23:05.515974 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 20:23:05.516207 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 13 20:23:05.516460 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 13 20:23:05.516706 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 13 20:23:05.516927 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 13 20:23:05.517208 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 13 20:23:05.517442 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 13 20:23:05.518757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 13 20:23:05.519028 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 13 20:23:05.519255 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 13 20:23:05.519513 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 13 20:23:05.527957 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 20:23:05.528270 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 13 20:23:05.528537 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 13 20:23:05.528614 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 20:23:05.528644 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 20:23:05.528699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 20:23:05.528729 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 20:23:05.528753 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 13 20:23:05.528773 
kernel: iommu: Default domain type: Translated Apr 13 20:23:05.528793 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 20:23:05.528816 kernel: efivars: Registered efivars operations Apr 13 20:23:05.528843 kernel: PCI: Using ACPI for IRQ routing Apr 13 20:23:05.528872 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 20:23:05.528899 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 13 20:23:05.528933 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 13 20:23:05.528961 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 13 20:23:05.528983 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 13 20:23:05.529002 kernel: vgaarb: loaded Apr 13 20:23:05.529023 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 20:23:05.529043 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 20:23:05.529063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 20:23:05.529084 kernel: pnp: PnP ACPI init Apr 13 20:23:05.529104 kernel: pnp: PnP ACPI: found 7 devices Apr 13 20:23:05.529129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 20:23:05.529149 kernel: NET: Registered PF_INET protocol family Apr 13 20:23:05.529170 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 13 20:23:05.529191 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 13 20:23:05.529211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 20:23:05.529232 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 20:23:05.529252 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 13 20:23:05.529273 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 13 20:23:05.529298 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.529319 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.529339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 20:23:05.529359 kernel: NET: Registered PF_XDP protocol family Apr 13 20:23:05.529652 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 20:23:05.529887 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:23:05.530126 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:23:05.530339 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 13 20:23:05.530620 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 13 20:23:05.530659 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:23:05.530689 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:23:05.530720 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 13 20:23:05.530751 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 13 20:23:05.530781 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:23:05.530811 kernel: clocksource: Switched to clocksource tsc Apr 13 20:23:05.530841 kernel: Initialise system trusted keyrings Apr 13 20:23:05.530878 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 13 20:23:05.530908 kernel: Key type asymmetric registered Apr 13 20:23:05.530937 kernel: Asymmetric key parser 'x509' registered Apr 13 20:23:05.530967 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:23:05.530997 kernel: io scheduler mq-deadline registered Apr 13 20:23:05.531027 kernel: io scheduler kyber registered Apr 13 20:23:05.531057 kernel: io scheduler bfq registered Apr 13 20:23:05.531087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:23:05.531115 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 13 20:23:05.531362 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Apr 13 20:23:05.531407 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 13 20:23:05.531681 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 13 20:23:05.531718 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 13 20:23:05.531954 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 13 20:23:05.531990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:23:05.532020 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:23:05.532051 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 13 20:23:05.532080 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 13 20:23:05.532117 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 13 20:23:05.532357 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 13 20:23:05.532404 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:23:05.532434 kernel: i8042: Warning: Keylock active Apr 13 20:23:05.532463 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:23:05.532493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:23:05.532780 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 13 20:23:05.533011 kernel: rtc_cmos 00:00: registered as rtc0 Apr 13 20:23:05.533231 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:23:04 UTC (1776111784) Apr 13 20:23:05.533461 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 13 20:23:05.533496 kernel: intel_pstate: CPU model not supported Apr 13 20:23:05.533526 kernel: pstore: Using crash dump compression: deflate Apr 13 20:23:05.533595 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:23:05.533626 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:23:05.533657 kernel: Segment Routing with IPv6 Apr 13 20:23:05.533686 kernel: In-situ OAM (IOAM) with IPv6 
Apr 13 20:23:05.533723 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:23:05.533753 kernel: Key type dns_resolver registered Apr 13 20:23:05.533783 kernel: IPI shorthand broadcast: enabled Apr 13 20:23:05.533817 kernel: sched_clock: Marking stable (1067005403, 354663615)->(1633447713, -211778695) Apr 13 20:23:05.533854 kernel: registered taskstats version 1 Apr 13 20:23:05.533898 kernel: Loading compiled-in X.509 certificates Apr 13 20:23:05.533938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:23:05.533968 kernel: Key type .fscrypt registered Apr 13 20:23:05.533997 kernel: Key type fscrypt-provisioning registered Apr 13 20:23:05.534031 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:23:05.534061 kernel: ima: No architecture policies found Apr 13 20:23:05.534091 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 13 20:23:05.534121 kernel: clk: Disabling unused clocks Apr 13 20:23:05.534147 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:23:05.534177 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:23:05.534207 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:23:05.534237 kernel: Run /init as init process Apr 13 20:23:05.534268 kernel: with arguments: Apr 13 20:23:05.534302 kernel: /init Apr 13 20:23:05.534331 kernel: with environment: Apr 13 20:23:05.534360 kernel: HOME=/ Apr 13 20:23:05.534396 kernel: TERM=linux Apr 13 20:23:05.534432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:23:05.534467 systemd[1]: Detected virtualization google. 
Apr 13 20:23:05.534499 systemd[1]: Detected architecture x86-64. Apr 13 20:23:05.534528 systemd[1]: Running in initrd. Apr 13 20:23:05.534545 systemd[1]: No hostname configured, using default hostname. Apr 13 20:23:05.534602 systemd[1]: Hostname set to . Apr 13 20:23:05.534625 systemd[1]: Initializing machine ID from random generator. Apr 13 20:23:05.534646 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:23:05.534667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:23:05.534688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:23:05.534711 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:23:05.534737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:23:05.534758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:23:05.534780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:23:05.534804 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:23:05.534825 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:23:05.534849 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:23:05.534871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:23:05.534896 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:23:05.534918 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:23:05.534961 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:23:05.534987 systemd[1]: Reached target timers.target - Timer Units. 
Apr 13 20:23:05.535009 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:23:05.535031 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:23:05.535057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:23:05.535079 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:23:05.535102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:23:05.535123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:23:05.535144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:23:05.535166 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:23:05.535189 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:23:05.535211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:23:05.535234 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:23:05.535260 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:23:05.535282 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:23:05.535304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:23:05.535380 systemd-journald[184]: Collecting audit messages is disabled. Apr 13 20:23:05.535449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:23:05.535480 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:23:05.535510 systemd-journald[184]: Journal started Apr 13 20:23:05.546336 systemd-journald[184]: Runtime Journal (/run/log/journal/ef05d35312a4465ab42eda75c31ae2cb) is 8.0M, max 148.7M, 140.7M free. Apr 13 20:23:05.546512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 13 20:23:05.556644 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:23:05.557333 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:23:05.618396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:23:05.618458 kernel: Bridge firewalling registered
Apr 13 20:23:05.560001 systemd-modules-load[185]: Inserted module 'overlay'
Apr 13 20:23:05.593529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:23:05.617455 systemd-modules-load[185]: Inserted module 'br_netfilter'
Apr 13 20:23:05.639606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:23:05.668347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:23:05.697848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:23:05.732061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:23:05.738007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:23:05.754266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:23:05.759997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:23:05.769043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:23:05.786726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:23:05.795923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:23:05.821212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:23:05.833145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:23:05.853016 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:23:05.855960 systemd-resolved[215]: Positive Trust Anchors:
Apr 13 20:23:05.855983 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:23:05.856063 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:23:05.860285 systemd-resolved[215]: Defaulting to hostname 'linux'.
Apr 13 20:23:05.877498 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:23:05.898207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:23:06.010833 dracut-cmdline[218]: dracut-dracut-053
Apr 13 20:23:06.018982 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:23:06.134618 kernel: SCSI subsystem initialized
Apr 13 20:23:06.152636 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:23:06.170612 kernel: iscsi: registered transport (tcp)
Apr 13 20:23:06.206428 kernel: iscsi: registered transport (qla4xxx)
Apr 13 20:23:06.206522 kernel: QLogic iSCSI HBA Driver
Apr 13 20:23:06.269297 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:23:06.286014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Apr 13 20:23:06.367877 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Apr 13 20:23:06.367978 kernel: device-mapper: uevent: version 1.0.3
Apr 13 20:23:06.377168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Apr 13 20:23:06.431607 kernel: raid6: avx2x4 gen() 17648 MB/s
Apr 13 20:23:06.452600 kernel: raid6: avx2x2 gen() 17658 MB/s
Apr 13 20:23:06.478663 kernel: raid6: avx2x1 gen() 13316 MB/s
Apr 13 20:23:06.478744 kernel: raid6: using algorithm avx2x2 gen() 17658 MB/s
Apr 13 20:23:06.505759 kernel: raid6: .... xor() 16996 MB/s, rmw enabled
Apr 13 20:23:06.505862 kernel: raid6: using avx2x2 recovery algorithm
Apr 13 20:23:06.537616 kernel: xor: automatically using best checksumming function avx
Apr 13 20:23:06.748605 kernel: Btrfs loaded, zoned=no, fsverity=no
Apr 13 20:23:06.764844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:23:06.780966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:23:06.821064 systemd-udevd[400]: Using default interface naming scheme 'v255'.
Apr 13 20:23:06.829900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:23:06.865869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Apr 13 20:23:06.887167 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation
Apr 13 20:23:06.933480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:23:06.960864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:23:07.096250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:23:07.116889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Apr 13 20:23:07.184093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:23:07.205224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:23:07.224584 kernel: cryptd: max_cpu_qlen set to 1000
Apr 13 20:23:07.237915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:23:07.283767 kernel: AVX2 version of gcm_enc/dec engaged.
Apr 13 20:23:07.283821 kernel: AES CTR mode by8 optimization enabled
Apr 13 20:23:07.250776 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:23:07.277870 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Apr 13 20:23:07.319657 kernel: scsi host0: Virtio SCSI HBA
Apr 13 20:23:07.323525 kernel: blk-mq: reduced tag depth to 10240
Apr 13 20:23:07.382622 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Apr 13 20:23:07.389620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:23:07.389830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:23:07.439259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:23:07.550938 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Apr 13 20:23:07.552221 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Apr 13 20:23:07.553186 kernel: sd 0:0:1:0: [sda] Write Protect is off
Apr 13 20:23:07.553967 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Apr 13 20:23:07.561254 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Apr 13 20:23:07.562096 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Apr 13 20:23:07.562187 kernel: GPT:17805311 != 33554431
Apr 13 20:23:07.562308 kernel: GPT:Alternate GPT header not at the end of the disk.
Apr 13 20:23:07.562400 kernel: GPT:17805311 != 33554431
Apr 13 20:23:07.562478 kernel: GPT: Use GNU Parted to correct GPT errors.
Apr 13 20:23:07.562554 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:23:07.451680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:23:07.599741 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Apr 13 20:23:07.451831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:23:07.474801 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:23:07.495892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:23:07.519501 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:23:07.656624 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (464)
Apr 13 20:23:07.682303 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Apr 13 20:23:07.693769 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444)
Apr 13 20:23:07.713429 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Apr 13 20:23:07.714270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:23:07.765036 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Apr 13 20:23:07.776960 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Apr 13 20:23:07.806788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 13 20:23:07.834846 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Apr 13 20:23:07.852547 disk-uuid[541]: Primary Header is updated.
Apr 13 20:23:07.852547 disk-uuid[541]: Secondary Entries is updated.
Apr 13 20:23:07.852547 disk-uuid[541]: Secondary Header is updated.
Apr 13 20:23:07.911771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:23:07.911822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:23:07.911858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:23:07.871923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:23:07.966981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:23:08.926612 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Apr 13 20:23:08.927728 disk-uuid[542]: The operation has completed successfully.
Apr 13 20:23:09.046663 systemd[1]: disk-uuid.service: Deactivated successfully.
Apr 13 20:23:09.046896 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Apr 13 20:23:09.070902 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Apr 13 20:23:09.091609 sh[568]: Success
Apr 13 20:23:09.120930 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Apr 13 20:23:09.237935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Apr 13 20:23:09.266776 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Apr 13 20:23:09.276399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Apr 13 20:23:09.346190 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d
Apr 13 20:23:09.346632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:23:09.346958 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Apr 13 20:23:09.363155 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Apr 13 20:23:09.363527 kernel: BTRFS info (device dm-0): using free space tree
Apr 13 20:23:09.397616 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Apr 13 20:23:09.407414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Apr 13 20:23:09.423051 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Apr 13 20:23:09.427909 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Apr 13 20:23:09.464886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Apr 13 20:23:09.519212 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:23:09.519409 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:23:09.519499 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:23:09.519610 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:23:09.519755 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:23:09.556631 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:23:09.555846 systemd[1]: mnt-oem.mount: Deactivated successfully.
Apr 13 20:23:09.579155 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Apr 13 20:23:09.598979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Apr 13 20:23:09.714544 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:23:09.761469 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:23:09.841154 ignition[667]: Ignition 2.19.0
Apr 13 20:23:09.841863 ignition[667]: Stage: fetch-offline
Apr 13 20:23:09.845493 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:23:09.841941 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:09.862132 systemd-networkd[751]: lo: Link UP
Apr 13 20:23:09.841960 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:09.862147 systemd-networkd[751]: lo: Gained carrier
Apr 13 20:23:09.842169 ignition[667]: parsed url from cmdline: ""
Apr 13 20:23:09.864550 systemd-networkd[751]: Enumeration completed
Apr 13 20:23:09.842177 ignition[667]: no config URL provided
Apr 13 20:23:09.865274 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:23:09.842195 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:23:09.865285 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:23:09.842211 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:23:09.866020 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:23:09.842223 ignition[667]: failed to fetch config: resource requires networking
Apr 13 20:23:09.868282 systemd-networkd[751]: eth0: Link UP
Apr 13 20:23:09.842802 ignition[667]: Ignition finished successfully
Apr 13 20:23:09.868295 systemd-networkd[751]: eth0: Gained carrier
Apr 13 20:23:09.952364 ignition[759]: Ignition 2.19.0
Apr 13 20:23:09.868306 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:23:09.952375 ignition[759]: Stage: fetch
Apr 13 20:23:09.877773 systemd[1]: Reached target network.target - Network.
Apr 13 20:23:09.952656 ignition[759]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:09.878690 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.108/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 13 20:23:09.952671 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:09.902859 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Apr 13 20:23:09.952837 ignition[759]: parsed url from cmdline: ""
Apr 13 20:23:09.969300 unknown[759]: fetched base config from "system"
Apr 13 20:23:09.952850 ignition[759]: no config URL provided
Apr 13 20:23:09.969320 unknown[759]: fetched base config from "system"
Apr 13 20:23:09.952858 ignition[759]: reading system config file "/usr/lib/ignition/user.ign"
Apr 13 20:23:09.969335 unknown[759]: fetched user config from "gcp"
Apr 13 20:23:09.952871 ignition[759]: no config at "/usr/lib/ignition/user.ign"
Apr 13 20:23:09.973675 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Apr 13 20:23:09.952895 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Apr 13 20:23:09.991881 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Apr 13 20:23:09.959496 ignition[759]: GET result: OK
Apr 13 20:23:10.035416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Apr 13 20:23:09.959760 ignition[759]: parsing config with SHA512: 1d47edccc2e71cefa37c07ca6f1edef40f69ffb1150892078a1d4683828fcbf90158a7d5b79913fae085c7f1611ad3b61937693a33d75b311d33fc49d97a022b
Apr 13 20:23:10.063895 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Apr 13 20:23:09.970075 ignition[759]: fetch: fetch complete
Apr 13 20:23:10.110970 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Apr 13 20:23:09.970084 ignition[759]: fetch: fetch passed
Apr 13 20:23:10.122234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Apr 13 20:23:09.970155 ignition[759]: Ignition finished successfully
Apr 13 20:23:10.140054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Apr 13 20:23:10.022177 ignition[766]: Ignition 2.19.0
Apr 13 20:23:10.161822 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:23:10.022187 ignition[766]: Stage: kargs
Apr 13 20:23:10.178909 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:23:10.022410 ignition[766]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:10.200977 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:23:10.022423 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:10.227879 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Apr 13 20:23:10.023714 ignition[766]: kargs: kargs passed
Apr 13 20:23:10.023801 ignition[766]: Ignition finished successfully
Apr 13 20:23:10.107479 ignition[773]: Ignition 2.19.0
Apr 13 20:23:10.107490 ignition[773]: Stage: disks
Apr 13 20:23:10.107781 ignition[773]: no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:10.107796 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:10.109062 ignition[773]: disks: disks passed
Apr 13 20:23:10.109128 ignition[773]: Ignition finished successfully
Apr 13 20:23:10.303456 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Apr 13 20:23:10.476777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Apr 13 20:23:10.512795 systemd[1]: Mounting sysroot.mount - /sysroot...
Apr 13 20:23:10.653613 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none.
Apr 13 20:23:10.653682 systemd[1]: Mounted sysroot.mount - /sysroot.
Apr 13 20:23:10.663990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:23:10.692981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:23:10.709786 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Apr 13 20:23:10.721008 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Apr 13 20:23:10.763588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789)
Apr 13 20:23:10.721107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Apr 13 20:23:10.827000 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:23:10.827068 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:23:10.827107 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:23:10.827123 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:23:10.827139 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:23:10.721160 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:23:10.742983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Apr 13 20:23:10.804302 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Apr 13 20:23:10.839108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:23:10.983830 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory
Apr 13 20:23:10.995742 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory
Apr 13 20:23:11.007744 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory
Apr 13 20:23:11.019810 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory
Apr 13 20:23:11.191255 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Apr 13 20:23:11.208004 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Apr 13 20:23:11.227853 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Apr 13 20:23:11.271959 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Apr 13 20:23:11.288162 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:23:11.320687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Apr 13 20:23:11.330120 ignition[902]: INFO : Ignition 2.19.0
Apr 13 20:23:11.330120 ignition[902]: INFO : Stage: mount
Apr 13 20:23:11.330120 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:11.330120 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:11.330120 ignition[902]: INFO : mount: mount passed
Apr 13 20:23:11.330120 ignition[902]: INFO : Ignition finished successfully
Apr 13 20:23:11.340648 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Apr 13 20:23:11.365747 systemd[1]: Starting ignition-files.service - Ignition (files)...
Apr 13 20:23:11.534968 systemd-networkd[751]: eth0: Gained IPv6LL
Apr 13 20:23:11.659990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Apr 13 20:23:11.717685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Apr 13 20:23:11.737196 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:23:11.737620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:23:11.737771 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:23:11.761930 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:23:11.762032 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:23:11.766068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:23:11.814625 ignition[930]: INFO : Ignition 2.19.0
Apr 13 20:23:11.814625 ignition[930]: INFO : Stage: files
Apr 13 20:23:11.814625 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:11.814625 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:11.851797 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:23:11.851797 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:23:11.830528 unknown[930]: wrote ssh authorized keys file for user: core
Apr 13 20:23:11.972790 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:23:12.104297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:23:12.104297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 20:23:12.602209 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:23:13.041589 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:13.041589 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: files passed
Apr 13 20:23:13.084036 ignition[930]: INFO : Ignition finished successfully
Apr 13 20:23:13.048569 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:23:13.078125 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:23:13.099855 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:23:13.124315 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:23:13.326024 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.326024 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.124496 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:23:13.383088 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.153954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:23:13.155249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:23:13.192856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:23:13.289539 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:23:13.289764 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:23:13.318953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:23:13.336092 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:23:13.350365 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:23:13.356123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:23:13.421509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:23:13.441875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:23:13.485452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:23:13.504296 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:23:13.526258 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:23:13.545171 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:23:13.545430 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:23:13.582292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:23:13.603165 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:23:13.624147 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:23:13.644169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:23:13.666094 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:23:13.686242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:23:13.705178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:23:13.729194 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:23:13.749268 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:23:13.768213 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:23:13.787101 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:23:13.787282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:23:13.817342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:23:13.837186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:23:13.860097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:23:13.860304 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:23:13.994019 ignition[983]: INFO : Ignition 2.19.0 Apr 13 20:23:13.994019 ignition[983]: INFO : Stage: umount Apr 13 20:23:13.994019 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:13.994019 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:13.994019 ignition[983]: INFO : umount: umount passed Apr 13 20:23:13.994019 ignition[983]: INFO : Ignition finished successfully Apr 13 20:23:13.881143 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:23:13.881332 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:23:13.908311 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:23:13.908660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:23:13.929365 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:23:13.929664 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:23:13.955257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:23:14.010266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:23:14.025829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:23:14.026228 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:23:14.039632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 13 20:23:14.040074 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:23:14.077972 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:23:14.079170 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:23:14.079368 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:23:14.090937 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:23:14.091110 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. 
Apr 13 20:23:14.113104 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:23:14.113474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:23:14.131538 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:23:14.131774 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:23:14.161215 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:23:14.161322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:23:14.183178 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:23:14.183276 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:23:14.202190 systemd[1]: Stopped target network.target - Network. Apr 13 20:23:14.220109 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:23:14.220245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:23:14.242224 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:23:14.260085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:23:14.263780 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:23:14.282114 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:23:14.302086 systemd[1]: Stopped target sockets.target - Socket Units. Apr 13 20:23:14.323159 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:23:14.323292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:23:14.343177 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:23:14.343342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:23:14.362183 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:23:14.362312 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Apr 13 20:23:14.381232 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:23:14.381342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:23:14.400178 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:23:14.400291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:23:14.419740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:23:14.425706 systemd-networkd[751]: eth0: DHCPv6 lease lost Apr 13 20:23:14.439423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:23:14.458755 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:23:14.458965 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:23:14.482537 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:23:14.482826 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:23:14.501717 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:23:14.501815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:23:14.525044 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:23:14.545801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 13 20:23:14.545994 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:23:14.558184 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:23:15.087772 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Apr 13 20:23:14.558305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:23:14.577146 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:23:14.577283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Apr 13 20:23:14.596172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:23:14.596282 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:23:14.617498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:23:14.636751 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:23:14.636991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:23:14.667064 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:23:14.667210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:23:14.687190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:23:14.687268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:23:14.709935 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:23:14.710387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:23:14.747036 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:23:14.747172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:23:14.774127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:23:14.774351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:23:14.811890 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:23:14.853868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:23:14.854111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:23:14.876165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Apr 13 20:23:14.876289 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:23:14.898108 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:23:14.898203 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:23:14.917071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:23:14.917161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:23:14.938765 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:23:14.938937 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:23:14.957749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:23:14.957924 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:23:14.979746 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:23:15.003019 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:23:15.043795 systemd[1]: Switching root. 
Apr 13 20:23:15.441776 systemd-journald[184]: Journal stopped Apr 13 20:23:05.509275 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 13 20:23:05.509300 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 13 20:23:05.509321 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Apr 13 20:23:05.509345 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 13 20:23:05.509371 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 13 20:23:05.509404 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 13 20:23:05.509426 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 13 20:23:05.509454 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 13 20:23:05.509477 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 13 20:23:05.509499 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 13 20:23:05.509520 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 13 20:23:05.509545 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 13 20:23:05.509823 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 13 20:23:05.509847 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 13 20:23:05.509870 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 13 20:23:05.509892 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 13 20:23:05.509921 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 13 20:23:05.509943 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 13 20:23:05.509965 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 13 20:23:05.509989 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 13 20:23:05.510013 kernel: ACPI: SRAT: Node 0 PXM 0
[mem 0x00100000-0xbfffffff] Apr 13 20:23:05.510037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 13 20:23:05.510063 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 13 20:23:05.510086 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 13 20:23:05.510110 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff] Apr 13 20:23:05.510137 kernel: Zone ranges: Apr 13 20:23:05.510162 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 20:23:05.510185 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 13 20:23:05.510209 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:23:05.510232 kernel: Movable zone start for each node Apr 13 20:23:05.510257 kernel: Early memory node ranges Apr 13 20:23:05.510283 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 13 20:23:05.510307 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 13 20:23:05.510333 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 13 20:23:05.510364 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 13 20:23:05.510393 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:23:05.510415 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 13 20:23:05.510441 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 20:23:05.510462 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 13 20:23:05.510481 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 13 20:23:05.510503 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 13 20:23:05.510527 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 13 20:23:05.510579 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 13 20:23:05.510614 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 20:23:05.510637 kernel: IOAPIC[0]: apic_id 
0, version 17, address 0xfec00000, GSI 0-23 Apr 13 20:23:05.510656 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 20:23:05.510678 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 20:23:05.510702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 20:23:05.510725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 20:23:05.510747 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 20:23:05.510773 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 13 20:23:05.510796 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 20:23:05.510825 kernel: Booting paravirtualized kernel on KVM Apr 13 20:23:05.510850 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 20:23:05.510876 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 13 20:23:05.510902 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 13 20:23:05.510922 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 13 20:23:05.510946 kernel: pcpu-alloc: [0] 0 1 Apr 13 20:23:05.511210 kernel: kvm-guest: PV spinlocks enabled Apr 13 20:23:05.511233 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 20:23:05.511256 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:23:05.511285 kernel: random: crng init done Apr 13 20:23:05.511303 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 13 20:23:05.511323 kernel: Inode-cache hash table entries: 524288 
(order: 10, 4194304 bytes, linear) Apr 13 20:23:05.511346 kernel: Fallback order for Node 0: 0 Apr 13 20:23:05.511364 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 13 20:23:05.511457 kernel: Policy zone: Normal Apr 13 20:23:05.511476 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 20:23:05.511495 kernel: software IO TLB: area num 2. Apr 13 20:23:05.511514 kernel: Memory: 7513176K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347148K reserved, 0K cma-reserved) Apr 13 20:23:05.511571 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 20:23:05.511592 kernel: Kernel/User page tables isolation: enabled Apr 13 20:23:05.511611 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 20:23:05.511629 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 20:23:05.511648 kernel: Dynamic Preempt: voluntary Apr 13 20:23:05.511666 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 20:23:05.511687 kernel: rcu: RCU event tracing is enabled. Apr 13 20:23:05.511707 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 20:23:05.511764 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 20:23:05.511784 kernel: Rude variant of Tasks RCU enabled. Apr 13 20:23:05.511805 kernel: Tracing variant of Tasks RCU enabled. Apr 13 20:23:05.511829 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 20:23:05.511849 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 20:23:05.511868 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 13 20:23:05.511889 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 20:23:05.511910 kernel: Console: colour dummy device 80x25 Apr 13 20:23:05.511934 kernel: printk: console [ttyS0] enabled Apr 13 20:23:05.511954 kernel: ACPI: Core revision 20230628 Apr 13 20:23:05.511974 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 20:23:05.511994 kernel: x2apic enabled Apr 13 20:23:05.512015 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 20:23:05.512034 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 13 20:23:05.512054 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:23:05.512076 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 13 20:23:05.512096 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 13 20:23:05.512121 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 13 20:23:05.512140 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 20:23:05.512161 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 13 20:23:05.512181 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 13 20:23:05.512202 kernel: Spectre V2 : Mitigation: IBRS Apr 13 20:23:05.512222 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 20:23:05.512244 kernel: RETBleed: Mitigation: IBRS Apr 13 20:23:05.512264 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 13 20:23:05.512284 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 13 20:23:05.512309 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 13 20:23:05.512329 kernel: MDS: Mitigation: Clear CPU buffers Apr 13 20:23:05.512594 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 20:23:05.512615 kernel: active return thunk: its_return_thunk Apr 13 20:23:05.512636 
kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 20:23:05.512656 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 20:23:05.512676 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 20:23:05.512696 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 20:23:05.512722 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 20:23:05.512748 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 13 20:23:05.512769 kernel: Freeing SMP alternatives memory: 32K Apr 13 20:23:05.512789 kernel: pid_max: default: 32768 minimum: 301 Apr 13 20:23:05.512810 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 20:23:05.512830 kernel: landlock: Up and running. Apr 13 20:23:05.512850 kernel: SELinux: Initializing. Apr 13 20:23:05.512870 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.512890 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.512911 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 13 20:23:05.512935 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512956 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512976 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:23:05.512996 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 13 20:23:05.513015 kernel: signal: max sigframe size: 1776 Apr 13 20:23:05.513037 kernel: rcu: Hierarchical SRCU implementation. Apr 13 20:23:05.513057 kernel: rcu: Max phase no-delay instances is 400. 
Apr 13 20:23:05.513078 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 20:23:05.513095 kernel: smp: Bringing up secondary CPUs ... Apr 13 20:23:05.513119 kernel: smpboot: x86: Booting SMP configuration: Apr 13 20:23:05.513137 kernel: .... node #0, CPUs: #1 Apr 13 20:23:05.513158 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 13 20:23:05.513179 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 13 20:23:05.513200 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 20:23:05.513220 kernel: smpboot: Max logical packages: 1 Apr 13 20:23:05.513240 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 13 20:23:05.513260 kernel: devtmpfs: initialized Apr 13 20:23:05.513284 kernel: x86/mm: Memory block size: 128MB Apr 13 20:23:05.513305 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 13 20:23:05.513326 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 20:23:05.513347 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 20:23:05.513367 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 20:23:05.513394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 20:23:05.513414 kernel: audit: initializing netlink subsys (disabled) Apr 13 20:23:05.513434 kernel: audit: type=2000 audit(1776111782.964:1): state=initialized audit_enabled=0 res=1 Apr 13 20:23:05.513454 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 20:23:05.513478 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 20:23:05.513498 kernel: cpuidle: using governor menu Apr 13 20:23:05.513518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 
20:23:05.513538 kernel: dca service started, version 1.12.1 Apr 13 20:23:05.513573 kernel: PCI: Using configuration type 1 for base access Apr 13 20:23:05.513594 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 20:23:05.513615 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 20:23:05.513636 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 20:23:05.513656 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 20:23:05.513681 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 20:23:05.513701 kernel: ACPI: Added _OSI(Module Device) Apr 13 20:23:05.513722 kernel: ACPI: Added _OSI(Processor Device) Apr 13 20:23:05.513912 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 20:23:05.513932 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 13 20:23:05.513952 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 20:23:05.513972 kernel: ACPI: Interpreter enabled Apr 13 20:23:05.513993 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 20:23:05.514014 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 20:23:05.514039 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 20:23:05.514059 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 13 20:23:05.514079 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 13 20:23:05.514099 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 20:23:05.514425 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 13 20:23:05.514680 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 13 20:23:05.514902 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 13 20:23:05.514934 kernel: PCI host bridge to bus 0000:00 Apr 13 
20:23:05.515139 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 13 20:23:05.515339 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 20:23:05.515565 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 20:23:05.515781 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 13 20:23:05.515974 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 20:23:05.516207 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 13 20:23:05.516460 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 13 20:23:05.516706 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 13 20:23:05.516927 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 13 20:23:05.517208 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 13 20:23:05.517442 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 13 20:23:05.518757 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 13 20:23:05.519028 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 13 20:23:05.519255 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 13 20:23:05.519513 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 13 20:23:05.527957 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 20:23:05.528270 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 13 20:23:05.528537 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 13 20:23:05.528614 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 20:23:05.528644 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 20:23:05.528699 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 20:23:05.528729 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 20:23:05.528753 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 13 20:23:05.528773 
kernel: iommu: Default domain type: Translated Apr 13 20:23:05.528793 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 20:23:05.528816 kernel: efivars: Registered efivars operations Apr 13 20:23:05.528843 kernel: PCI: Using ACPI for IRQ routing Apr 13 20:23:05.528872 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 20:23:05.528899 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 13 20:23:05.528933 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 13 20:23:05.528961 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 13 20:23:05.528983 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 13 20:23:05.529002 kernel: vgaarb: loaded Apr 13 20:23:05.529023 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 20:23:05.529043 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 20:23:05.529063 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 20:23:05.529084 kernel: pnp: PnP ACPI init Apr 13 20:23:05.529104 kernel: pnp: PnP ACPI: found 7 devices Apr 13 20:23:05.529129 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 20:23:05.529149 kernel: NET: Registered PF_INET protocol family Apr 13 20:23:05.529170 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 13 20:23:05.529191 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 13 20:23:05.529211 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 20:23:05.529232 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 20:23:05.529252 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 13 20:23:05.529273 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 13 20:23:05.529298 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.529319 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:23:05.529339 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 20:23:05.529359 kernel: NET: Registered PF_XDP protocol family Apr 13 20:23:05.529652 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 20:23:05.529887 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:23:05.530126 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:23:05.530339 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 13 20:23:05.530620 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 13 20:23:05.530659 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:23:05.530689 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:23:05.530720 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 13 20:23:05.530751 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 13 20:23:05.530781 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:23:05.530811 kernel: clocksource: Switched to clocksource tsc Apr 13 20:23:05.530841 kernel: Initialise system trusted keyrings Apr 13 20:23:05.530878 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 13 20:23:05.530908 kernel: Key type asymmetric registered Apr 13 20:23:05.530937 kernel: Asymmetric key parser 'x509' registered Apr 13 20:23:05.530967 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:23:05.530997 kernel: io scheduler mq-deadline registered Apr 13 20:23:05.531027 kernel: io scheduler kyber registered Apr 13 20:23:05.531057 kernel: io scheduler bfq registered Apr 13 20:23:05.531087 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:23:05.531115 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 13 20:23:05.531362 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Apr 13 20:23:05.531407 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 13 20:23:05.531681 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 13 20:23:05.531718 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 13 20:23:05.531954 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 13 20:23:05.531990 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:23:05.532020 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:23:05.532051 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 13 20:23:05.532080 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 13 20:23:05.532117 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 13 20:23:05.532357 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 13 20:23:05.532404 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:23:05.532434 kernel: i8042: Warning: Keylock active Apr 13 20:23:05.532463 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:23:05.532493 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:23:05.532780 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 13 20:23:05.533011 kernel: rtc_cmos 00:00: registered as rtc0 Apr 13 20:23:05.533231 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:23:04 UTC (1776111784) Apr 13 20:23:05.533461 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 13 20:23:05.533496 kernel: intel_pstate: CPU model not supported Apr 13 20:23:05.533526 kernel: pstore: Using crash dump compression: deflate Apr 13 20:23:05.533595 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:23:05.533626 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:23:05.533657 kernel: Segment Routing with IPv6 Apr 13 20:23:05.533686 kernel: In-situ OAM (IOAM) with IPv6 
Apr 13 20:23:05.533723 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:23:05.533753 kernel: Key type dns_resolver registered Apr 13 20:23:05.533783 kernel: IPI shorthand broadcast: enabled Apr 13 20:23:05.533817 kernel: sched_clock: Marking stable (1067005403, 354663615)->(1633447713, -211778695) Apr 13 20:23:05.533854 kernel: registered taskstats version 1 Apr 13 20:23:05.533898 kernel: Loading compiled-in X.509 certificates Apr 13 20:23:05.533938 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:23:05.533968 kernel: Key type .fscrypt registered Apr 13 20:23:05.533997 kernel: Key type fscrypt-provisioning registered Apr 13 20:23:05.534031 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:23:05.534061 kernel: ima: No architecture policies found Apr 13 20:23:05.534091 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Apr 13 20:23:05.534121 kernel: clk: Disabling unused clocks Apr 13 20:23:05.534147 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:23:05.534177 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:23:05.534207 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:23:05.534237 kernel: Run /init as init process Apr 13 20:23:05.534268 kernel: with arguments: Apr 13 20:23:05.534302 kernel: /init Apr 13 20:23:05.534331 kernel: with environment: Apr 13 20:23:05.534360 kernel: HOME=/ Apr 13 20:23:05.534396 kernel: TERM=linux Apr 13 20:23:05.534432 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:23:05.534467 systemd[1]: Detected virtualization google. 
Apr 13 20:23:05.534499 systemd[1]: Detected architecture x86-64. Apr 13 20:23:05.534528 systemd[1]: Running in initrd. Apr 13 20:23:05.534545 systemd[1]: No hostname configured, using default hostname. Apr 13 20:23:05.534602 systemd[1]: Hostname set to . Apr 13 20:23:05.534625 systemd[1]: Initializing machine ID from random generator. Apr 13 20:23:05.534646 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:23:05.534667 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:23:05.534688 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:23:05.534711 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:23:05.534737 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:23:05.534758 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:23:05.534780 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:23:05.534804 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:23:05.534825 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:23:05.534849 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:23:05.534871 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:23:05.534896 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:23:05.534918 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:23:05.534961 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:23:05.534987 systemd[1]: Reached target timers.target - Timer Units. 
Apr 13 20:23:05.535009 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:23:05.535031 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:23:05.535057 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:23:05.535079 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:23:05.535102 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:23:05.535123 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:23:05.535144 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:23:05.535166 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:23:05.535189 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:23:05.535211 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:23:05.535234 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:23:05.535260 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:23:05.535282 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:23:05.535304 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:23:05.535380 systemd-journald[184]: Collecting audit messages is disabled. Apr 13 20:23:05.535449 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:23:05.535480 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:23:05.535510 systemd-journald[184]: Journal started Apr 13 20:23:05.546336 systemd-journald[184]: Runtime Journal (/run/log/journal/ef05d35312a4465ab42eda75c31ae2cb) is 8.0M, max 148.7M, 140.7M free. Apr 13 20:23:05.546512 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Apr 13 20:23:05.556644 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:23:05.557333 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:23:05.618396 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:23:05.618458 kernel: Bridge firewalling registered Apr 13 20:23:05.560001 systemd-modules-load[185]: Inserted module 'overlay' Apr 13 20:23:05.593529 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:23:05.617455 systemd-modules-load[185]: Inserted module 'br_netfilter' Apr 13 20:23:05.639606 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:23:05.668347 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:23:05.697848 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:23:05.732061 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:23:05.738007 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:23:05.754266 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:23:05.759997 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:23:05.769043 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:23:05.786726 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:23:05.795923 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:23:05.821212 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:23:05.833145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Apr 13 20:23:05.853016 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:23:05.855960 systemd-resolved[215]: Positive Trust Anchors: Apr 13 20:23:05.855983 systemd-resolved[215]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:23:05.856063 systemd-resolved[215]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:23:05.860285 systemd-resolved[215]: Defaulting to hostname 'linux'. Apr 13 20:23:05.877498 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:23:05.898207 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:23:06.010833 dracut-cmdline[218]: dracut-dracut-053 Apr 13 20:23:06.018982 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:23:06.134618 kernel: SCSI subsystem initialized Apr 13 20:23:06.152636 kernel: Loading iSCSI transport class v2.0-870. 
Apr 13 20:23:06.170612 kernel: iscsi: registered transport (tcp) Apr 13 20:23:06.206428 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:23:06.206522 kernel: QLogic iSCSI HBA Driver Apr 13 20:23:06.269297 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:23:06.286014 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:23:06.367877 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:23:06.367978 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:23:06.377168 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:23:06.431607 kernel: raid6: avx2x4 gen() 17648 MB/s Apr 13 20:23:06.452600 kernel: raid6: avx2x2 gen() 17658 MB/s Apr 13 20:23:06.478663 kernel: raid6: avx2x1 gen() 13316 MB/s Apr 13 20:23:06.478744 kernel: raid6: using algorithm avx2x2 gen() 17658 MB/s Apr 13 20:23:06.505759 kernel: raid6: .... xor() 16996 MB/s, rmw enabled Apr 13 20:23:06.505862 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:23:06.537616 kernel: xor: automatically using best checksumming function avx Apr 13 20:23:06.748605 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:23:06.764844 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:23:06.780966 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:23:06.821064 systemd-udevd[400]: Using default interface naming scheme 'v255'. Apr 13 20:23:06.829900 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:23:06.865869 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:23:06.887167 dracut-pre-trigger[411]: rd.md=0: removing MD RAID activation Apr 13 20:23:06.933480 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 20:23:06.960864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:23:07.096250 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:23:07.116889 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:23:07.184093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:23:07.205224 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:23:07.224584 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:23:07.237915 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:23:07.283767 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:23:07.283821 kernel: AES CTR mode by8 optimization enabled Apr 13 20:23:07.250776 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:23:07.277870 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:23:07.319657 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:23:07.323525 kernel: blk-mq: reduced tag depth to 10240 Apr 13 20:23:07.382622 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 13 20:23:07.389620 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:23:07.389830 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:23:07.439259 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 13 20:23:07.550938 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 13 20:23:07.552221 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 13 20:23:07.553186 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 13 20:23:07.553967 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 13 20:23:07.561254 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:23:07.562096 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:23:07.562187 kernel: GPT:17805311 != 33554431 Apr 13 20:23:07.562308 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:23:07.562400 kernel: GPT:17805311 != 33554431 Apr 13 20:23:07.562478 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:23:07.562554 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:23:07.451680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:23:07.599741 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 13 20:23:07.451831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:23:07.474801 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:23:07.495892 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:23:07.519501 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:23:07.656624 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (464) Apr 13 20:23:07.682303 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 13 20:23:07.693769 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (444) Apr 13 20:23:07.713429 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Apr 13 20:23:07.714270 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:23:07.765036 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 13 20:23:07.776960 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 13 20:23:07.806788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:23:07.834846 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:23:07.852547 disk-uuid[541]: Primary Header is updated. Apr 13 20:23:07.852547 disk-uuid[541]: Secondary Entries is updated. Apr 13 20:23:07.852547 disk-uuid[541]: Secondary Header is updated. Apr 13 20:23:07.911771 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:23:07.911822 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:23:07.911858 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:23:07.871923 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:23:07.966981 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:23:08.926612 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:23:08.927728 disk-uuid[542]: The operation has completed successfully. Apr 13 20:23:09.046663 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:23:09.046896 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:23:09.070902 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:23:09.091609 sh[568]: Success Apr 13 20:23:09.120930 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 13 20:23:09.237935 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:23:09.266776 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 13 20:23:09.276399 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 13 20:23:09.346190 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:23:09.346632 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:23:09.346958 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:23:09.363155 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:23:09.363527 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:23:09.397616 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:23:09.407414 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:23:09.423051 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:23:09.427909 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:23:09.464886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:23:09.519212 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:23:09.519409 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:23:09.519499 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:23:09.519610 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:23:09.519755 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:23:09.556631 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:23:09.555846 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:23:09.579155 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Apr 13 20:23:09.598979 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 20:23:09.714544 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:23:09.761469 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:23:09.841154 ignition[667]: Ignition 2.19.0 Apr 13 20:23:09.841863 ignition[667]: Stage: fetch-offline Apr 13 20:23:09.845493 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:23:09.841941 ignition[667]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:09.862132 systemd-networkd[751]: lo: Link UP Apr 13 20:23:09.841960 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:09.862147 systemd-networkd[751]: lo: Gained carrier Apr 13 20:23:09.842169 ignition[667]: parsed url from cmdline: "" Apr 13 20:23:09.864550 systemd-networkd[751]: Enumeration completed Apr 13 20:23:09.842177 ignition[667]: no config URL provided Apr 13 20:23:09.865274 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:23:09.842195 ignition[667]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:23:09.865285 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:23:09.842211 ignition[667]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:23:09.866020 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Apr 13 20:23:09.842223 ignition[667]: failed to fetch config: resource requires networking Apr 13 20:23:09.868282 systemd-networkd[751]: eth0: Link UP Apr 13 20:23:09.842802 ignition[667]: Ignition finished successfully Apr 13 20:23:09.868295 systemd-networkd[751]: eth0: Gained carrier Apr 13 20:23:09.952364 ignition[759]: Ignition 2.19.0 Apr 13 20:23:09.868306 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:23:09.952375 ignition[759]: Stage: fetch Apr 13 20:23:09.877773 systemd[1]: Reached target network.target - Network. Apr 13 20:23:09.952656 ignition[759]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:09.878690 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.108/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:23:09.952671 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:09.902859 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:23:09.952837 ignition[759]: parsed url from cmdline: "" Apr 13 20:23:09.969300 unknown[759]: fetched base config from "system" Apr 13 20:23:09.952850 ignition[759]: no config URL provided Apr 13 20:23:09.969320 unknown[759]: fetched base config from "system" Apr 13 20:23:09.952858 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:23:09.969335 unknown[759]: fetched user config from "gcp" Apr 13 20:23:09.952871 ignition[759]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:23:09.973675 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:23:09.952895 ignition[759]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 13 20:23:09.991881 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:23:09.959496 ignition[759]: GET result: OK Apr 13 20:23:10.035416 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Apr 13 20:23:09.959760 ignition[759]: parsing config with SHA512: 1d47edccc2e71cefa37c07ca6f1edef40f69ffb1150892078a1d4683828fcbf90158a7d5b79913fae085c7f1611ad3b61937693a33d75b311d33fc49d97a022b Apr 13 20:23:10.063895 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 13 20:23:09.970075 ignition[759]: fetch: fetch complete Apr 13 20:23:10.110970 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:23:09.970084 ignition[759]: fetch: fetch passed Apr 13 20:23:10.122234 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:23:09.970155 ignition[759]: Ignition finished successfully Apr 13 20:23:10.140054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:23:10.022177 ignition[766]: Ignition 2.19.0 Apr 13 20:23:10.161822 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:23:10.022187 ignition[766]: Stage: kargs Apr 13 20:23:10.178909 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:23:10.022410 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:10.200977 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:23:10.022423 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:10.227879 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Apr 13 20:23:10.023714 ignition[766]: kargs: kargs passed Apr 13 20:23:10.023801 ignition[766]: Ignition finished successfully Apr 13 20:23:10.107479 ignition[773]: Ignition 2.19.0 Apr 13 20:23:10.107490 ignition[773]: Stage: disks Apr 13 20:23:10.107781 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:10.107796 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:10.109062 ignition[773]: disks: disks passed Apr 13 20:23:10.109128 ignition[773]: Ignition finished successfully Apr 13 20:23:10.303456 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:23:10.476777 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:23:10.512795 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 13 20:23:10.653613 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:23:10.653682 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:23:10.663990 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:23:10.692981 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:23:10.709786 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:23:10.721008 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:23:10.763588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Apr 13 20:23:10.721107 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). 
Apr 13 20:23:10.827000 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:23:10.827068 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:23:10.827107 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:23:10.827123 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:23:10.827139 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:23:10.721160 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:23:10.742983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:23:10.804302 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 13 20:23:10.839108 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:23:10.983830 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:23:10.995742 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:23:11.007744 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:23:11.019810 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:23:11.191255 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:23:11.208004 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:23:11.227853 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:23:11.271959 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:23:11.288162 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:23:11.320687 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Apr 13 20:23:11.330120 ignition[902]: INFO : Ignition 2.19.0 Apr 13 20:23:11.330120 ignition[902]: INFO : Stage: mount Apr 13 20:23:11.330120 ignition[902]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:23:11.330120 ignition[902]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:23:11.330120 ignition[902]: INFO : mount: mount passed Apr 13 20:23:11.330120 ignition[902]: INFO : Ignition finished successfully Apr 13 20:23:11.340648 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:23:11.365747 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:23:11.534968 systemd-networkd[751]: eth0: Gained IPv6LL Apr 13 20:23:11.659990 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:23:11.717685 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913) Apr 13 20:23:11.737196 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:23:11.737620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:23:11.737771 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:23:11.761930 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:23:11.762032 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:23:11.766068 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 13 20:23:11.814625 ignition[930]: INFO : Ignition 2.19.0
Apr 13 20:23:11.814625 ignition[930]: INFO : Stage: files
Apr 13 20:23:11.814625 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:11.814625 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:11.851797 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:23:11.851797 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:23:11.851797 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:23:11.830528 unknown[930]: wrote ssh authorized keys file for user: core
Apr 13 20:23:11.972790 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:23:12.104297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:23:12.104297 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:12.138785 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.4-x86-64.raw: attempt #1
Apr 13 20:23:12.602209 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:23:13.041589 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.4-x86-64.raw"
Apr 13 20:23:13.041589 ignition[930]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:23:13.084036 ignition[930]: INFO : files: files passed
Apr 13 20:23:13.084036 ignition[930]: INFO : Ignition finished successfully
Apr 13 20:23:13.048569 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:23:13.078125 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:23:13.099855 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:23:13.124315 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:23:13.326024 initrd-setup-root-after-ignition[958]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.326024 initrd-setup-root-after-ignition[958]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.124496 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:23:13.383088 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:23:13.153954 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:23:13.155249 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:23:13.192856 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:23:13.289539 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:23:13.289764 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:23:13.318953 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:23:13.336092 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:23:13.350365 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:23:13.356123 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:23:13.421509 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:23:13.441875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:23:13.485452 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:23:13.504296 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:23:13.526258 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:23:13.545171 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:23:13.545430 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:23:13.582292 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:23:13.603165 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:23:13.624147 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:23:13.644169 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:23:13.666094 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:23:13.686242 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:23:13.705178 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:23:13.729194 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:23:13.749268 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:23:13.768213 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:23:13.787101 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:23:13.787282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:23:13.817342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:23:13.837186 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:23:13.860097 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:23:13.860304 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:23:13.994019 ignition[983]: INFO : Ignition 2.19.0
Apr 13 20:23:13.994019 ignition[983]: INFO : Stage: umount
Apr 13 20:23:13.994019 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:23:13.994019 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:23:13.994019 ignition[983]: INFO : umount: umount passed
Apr 13 20:23:13.994019 ignition[983]: INFO : Ignition finished successfully
Apr 13 20:23:13.881143 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:23:13.881332 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:23:13.908311 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:23:13.908660 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:23:13.929365 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:23:13.929664 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:23:13.955257 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:23:14.010266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:23:14.025829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:23:14.026228 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:23:14.039632 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:23:14.040074 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:23:14.077972 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:23:14.079170 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:23:14.079368 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:23:14.090937 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:23:14.091110 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:23:14.113104 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:23:14.113474 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:23:14.131538 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:23:14.131774 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:23:14.161215 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:23:14.161322 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:23:14.183178 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:23:14.183276 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:23:14.202190 systemd[1]: Stopped target network.target - Network.
Apr 13 20:23:14.220109 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:23:14.220245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:23:14.242224 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:23:14.260085 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:23:14.263780 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:23:14.282114 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:23:14.302086 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:23:14.323159 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:23:14.323292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:23:14.343177 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:23:14.343342 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:23:14.362183 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:23:14.362312 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:23:14.381232 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:23:14.381342 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:23:14.400178 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:23:14.400291 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:23:14.419740 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:23:14.425706 systemd-networkd[751]: eth0: DHCPv6 lease lost
Apr 13 20:23:14.439423 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:23:14.458755 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:23:14.458965 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:23:14.482537 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:23:14.482826 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:23:14.501717 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:23:14.501815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:23:14.525044 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:23:14.545801 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:23:14.545994 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:23:14.558184 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:23:15.087772 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:23:14.558305 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:23:14.577146 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:23:14.577283 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:23:14.596172 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:23:14.596282 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:23:14.617498 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:23:14.636751 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:23:14.636991 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:23:14.667064 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:23:14.667210 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:23:14.687190 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:23:14.687268 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:23:14.709935 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:23:14.710387 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:23:14.747036 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:23:14.747172 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:23:14.774127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:23:14.774351 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:23:14.811890 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:23:14.853868 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:23:14.854111 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:23:14.876165 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:23:14.876289 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:23:14.898108 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:23:14.898203 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:23:14.917071 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:23:14.917161 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:23:14.938765 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:23:14.938937 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:23:14.957749 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:23:14.957924 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:23:14.979746 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:23:15.003019 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:23:15.043795 systemd[1]: Switching root.
Apr 13 20:23:15.441776 systemd-journald[184]: Journal stopped
Apr 13 20:23:18.136369 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:23:18.136455 kernel: SELinux: policy capability open_perms=1
Apr 13 20:23:18.136487 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:23:18.136514 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:23:18.136542 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:23:18.136603 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:23:18.136636 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:23:18.136668 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:23:18.136701 kernel: audit: type=1403 audit(1776111795.731:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:23:18.136732 systemd[1]: Successfully loaded SELinux policy in 95.611ms.
Apr 13 20:23:18.136765 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.028ms.
Apr 13 20:23:18.136799 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:23:18.136834 systemd[1]: Detected virtualization google.
Apr 13 20:23:18.136868 systemd[1]: Detected architecture x86-64.
Apr 13 20:23:18.136904 systemd[1]: Detected first boot.
Apr 13 20:23:18.136956 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:23:18.136989 zram_generator::config[1024]: No configuration found.
Apr 13 20:23:18.137022 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:23:18.137064 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:23:18.137101 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:23:18.137134 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:23:18.137163 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:23:18.137195 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:23:18.137227 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:23:18.137260 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:23:18.137293 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:23:18.137337 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:23:18.137372 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:23:18.137403 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:23:18.137442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:23:18.137475 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:23:18.137498 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:23:18.137523 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:23:18.137548 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:23:18.137624 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:23:18.137650 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:23:18.137681 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:23:18.137705 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:23:18.137726 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:23:18.137748 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:23:18.137782 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:23:18.137811 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:23:18.137838 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:23:18.137865 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:23:18.137888 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:23:18.137910 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:23:18.137938 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:23:18.137965 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:23:18.137993 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:23:18.138022 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:23:18.138072 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:23:18.138104 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:23:18.138136 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:23:18.138168 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:23:18.138199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:18.138236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:23:18.138268 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:23:18.138298 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:23:18.138326 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:23:18.138359 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:23:18.138390 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:23:18.138422 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:23:18.138455 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:23:18.138495 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:23:18.138527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:23:18.138590 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:23:18.138614 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:23:18.138648 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:23:18.138677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:23:18.138982 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:23:18.139020 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:23:18.139075 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:23:18.139110 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:23:18.139143 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:23:18.139178 kernel: ACPI: bus type drm_connector registered
Apr 13 20:23:18.139208 kernel: fuse: init (API version 7.39)
Apr 13 20:23:18.139240 kernel: loop: module loaded
Apr 13 20:23:18.139269 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:23:18.139302 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:23:18.139337 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:23:18.139425 systemd-journald[1111]: Collecting audit messages is disabled.
Apr 13 20:23:18.139496 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:23:18.139530 systemd-journald[1111]: Journal started
Apr 13 20:23:18.139628 systemd-journald[1111]: Runtime Journal (/run/log/journal/db17d0b78d5e4f28aa532ccd5ea2ef84) is 8.0M, max 148.7M, 140.7M free.
Apr 13 20:23:16.773122 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:23:16.802031 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:23:16.802818 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:23:18.163590 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:23:18.188434 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:23:18.188644 systemd[1]: Stopped verity-setup.service.
Apr 13 20:23:18.215586 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:18.226635 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:23:18.238542 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:23:18.249075 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:23:18.261103 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:23:18.272104 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:23:18.283084 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:23:18.294099 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:23:18.305382 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:23:18.317403 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:23:18.329345 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:23:18.330300 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:23:18.343393 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:23:18.343992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:23:18.356392 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:23:18.356754 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:23:18.368356 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:23:18.368750 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:23:18.381283 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:23:18.381684 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:23:18.393275 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:23:18.393644 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:23:18.404410 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:23:18.415305 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:23:18.428290 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:23:18.440317 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:23:18.467265 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:23:18.484773 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:23:18.508692 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:23:18.518818 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:23:18.519098 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:23:18.532372 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:23:18.549913 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:23:18.572972 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:23:18.583033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:23:18.593904 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:23:18.613309 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:23:18.624823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:23:18.632409 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:23:18.642969 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:23:18.655810 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:23:18.667739 systemd-journald[1111]: Time spent on flushing to /var/log/journal/db17d0b78d5e4f28aa532ccd5ea2ef84 is 207.259ms for 929 entries.
Apr 13 20:23:18.667739 systemd-journald[1111]: System Journal (/var/log/journal/db17d0b78d5e4f28aa532ccd5ea2ef84) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:23:18.952435 systemd-journald[1111]: Received client request to flush runtime journal.
Apr 13 20:23:18.952590 kernel: loop0: detected capacity change from 0 to 142488
Apr 13 20:23:18.952671 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:23:18.681830 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:23:18.702830 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:23:18.718897 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:23:18.735248 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:23:18.746524 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:23:18.761231 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:23:18.775675 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:23:18.798397 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:23:18.821022 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:23:18.855898 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:23:18.878915 udevadm[1144]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 20:23:18.938014 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Apr 13 20:23:18.938077 systemd-tmpfiles[1143]: ACLs are not supported, ignoring.
Apr 13 20:23:18.955900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:23:18.971018 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:23:18.972456 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:23:18.984316 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:23:18.993972 kernel: loop1: detected capacity change from 0 to 54824
Apr 13 20:23:19.022884 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:23:19.103852 kernel: loop2: detected capacity change from 0 to 219192
Apr 13 20:23:19.148091 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:23:19.168989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:23:19.225599 kernel: loop3: detected capacity change from 0 to 140768
Apr 13 20:23:19.244005 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Apr 13 20:23:19.244757 systemd-tmpfiles[1165]: ACLs are not supported, ignoring.
Apr 13 20:23:19.256105 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:23:19.319252 kernel: loop4: detected capacity change from 0 to 142488
Apr 13 20:23:19.372990 kernel: loop5: detected capacity change from 0 to 54824
Apr 13 20:23:19.420606 kernel: loop6: detected capacity change from 0 to 219192
Apr 13 20:23:19.478110 kernel: loop7: detected capacity change from 0 to 140768
Apr 13 20:23:19.534466 (sd-merge)[1171]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Apr 13 20:23:19.537674 (sd-merge)[1171]: Merged extensions into '/usr'.
Apr 13 20:23:19.552043 systemd[1]: Reloading requested from client PID 1142 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:23:19.552262 systemd[1]: Reloading...
Apr 13 20:23:19.768626 zram_generator::config[1197]: No configuration found.
Apr 13 20:23:19.982681 ldconfig[1137]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:23:20.102650 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:23:20.231801 systemd[1]: Reloading finished in 677 ms.
Apr 13 20:23:20.268734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:23:20.279492 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:23:20.305831 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:23:20.323883 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:23:20.347775 systemd[1]: Reloading requested from client PID 1237 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:23:20.347800 systemd[1]: Reloading...
Apr 13 20:23:20.390547 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:23:20.392404 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:23:20.394693 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:23:20.395500 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 13 20:23:20.395749 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Apr 13 20:23:20.406367 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:23:20.407628 systemd-tmpfiles[1238]: Skipping /boot
Apr 13 20:23:20.454744 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:23:20.454771 systemd-tmpfiles[1238]: Skipping /boot
Apr 13 20:23:20.533607 zram_generator::config[1264]: No configuration found.
Apr 13 20:23:20.715917 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:23:20.783956 systemd[1]: Reloading finished in 434 ms.
Apr 13 20:23:20.810032 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:23:20.832512 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:23:20.857075 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:23:20.883728 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:23:20.911056 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:23:20.931865 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:23:20.951444 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:23:20.971171 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:23:20.993768 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:20.994411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:23:21.008763 augenrules[1327]: No rules
Apr 13 20:23:21.006903 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:23:21.025769 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:23:21.046953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:23:21.057910 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:23:21.068110 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:23:21.073921 systemd-udevd[1324]: Using default interface naming scheme 'v255'.
Apr 13 20:23:21.078711 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:21.086334 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:23:21.098499 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:23:21.111667 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:23:21.111940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:23:21.124686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:23:21.124964 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:23:21.139300 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:23:21.140372 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:23:21.157303 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:23:21.170508 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:23:21.183265 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:23:21.227280 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:21.229131 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:23:21.239987 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:23:21.259878 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:23:21.284760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:23:21.295199 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:23:21.306863 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:23:21.325650 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:23:21.335716 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:23:21.336781 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:21.340229 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:23:21.355194 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:23:21.355512 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:23:21.367817 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:23:21.368161 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:23:21.374922 systemd-resolved[1321]: Positive Trust Anchors:
Apr 13 20:23:21.374964 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:23:21.375036 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:23:21.381342 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:23:21.382936 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:23:21.417636 systemd-resolved[1321]: Defaulting to hostname 'linux'.
Apr 13 20:23:21.420350 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:23:21.432334 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:23:21.475665 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:23:21.499057 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:23:21.499736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:23:21.510907 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:21.511302 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:23:21.522983 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:23:21.541866 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:23:21.560444 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:23:21.579434 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:23:21.603969 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 20:23:21.612920 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:23:21.613084 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:23:21.623820 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:23:21.623865 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:23:21.625154 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:23:21.625456 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:23:21.635229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:23:21.635668 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:23:21.646454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:23:21.646975 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:23:21.659342 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:23:21.659905 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:23:21.671375 systemd-networkd[1370]: lo: Link UP
Apr 13 20:23:21.671392 systemd-networkd[1370]: lo: Gained carrier
Apr 13 20:23:21.687954 systemd-networkd[1370]: Enumeration completed
Apr 13 20:23:21.690737 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:23:21.692835 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:23:21.692864 systemd-networkd[1370]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:23:21.694378 systemd-networkd[1370]: eth0: Link UP
Apr 13 20:23:21.694405 systemd-networkd[1370]: eth0: Gained carrier
Apr 13 20:23:21.694442 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:23:21.709546 systemd[1]: Reached target network.target - Network.
Apr 13 20:23:21.712251 systemd-networkd[1370]: eth0: DHCPv4 address 10.128.0.108/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 13 20:23:21.721896 systemd-networkd[1370]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:23:21.728024 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:23:21.737223 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:23:21.737359 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:23:21.738666 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 20:23:21.760610 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1338)
Apr 13 20:23:21.766955 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Apr 13 20:23:21.830643 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Apr 13 20:23:21.848611 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 13 20:23:21.874132 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:23:21.877835 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Apr 13 20:23:21.918592 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Apr 13 20:23:21.923590 kernel: ACPI: button: Sleep Button [SLPF]
Apr 13 20:23:21.947674 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Apr 13 20:23:21.958548 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 13 20:23:21.980329 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:23:22.003740 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:23:22.045189 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:23:22.093589 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:23:22.106384 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:23:22.126442 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:23:22.145881 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:23:22.181679 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:23:22.228823 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:23:22.229662 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:23:22.235884 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:23:22.258749 lvm[1424]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:23:22.277584 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:23:22.289238 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:23:22.299958 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:23:22.311866 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:23:22.324086 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:23:22.334106 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:23:22.345859 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:23:22.357844 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:23:22.357974 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:23:22.366853 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:23:22.376673 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:23:22.389201 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:23:22.404761 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:23:22.417118 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:23:22.430151 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:23:22.441844 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:23:22.451859 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:23:22.460938 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:23:22.461055 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:23:22.466750 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:23:22.489912 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:23:22.506799 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:23:22.540099 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:23:22.556175 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:23:22.567270 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:23:22.570463 jq[1434]: false
Apr 13 20:23:22.577015 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:23:22.599895 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 20:23:22.618123 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:23:22.624322 coreos-metadata[1432]: Apr 13 20:23:22.624 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Apr 13 20:23:22.634592 coreos-metadata[1432]: Apr 13 20:23:22.633 INFO Fetch successful
Apr 13 20:23:22.634592 coreos-metadata[1432]: Apr 13 20:23:22.633 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Apr 13 20:23:22.636597 coreos-metadata[1432]: Apr 13 20:23:22.635 INFO Fetch successful
Apr 13 20:23:22.636597 coreos-metadata[1432]: Apr 13 20:23:22.636 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Apr 13 20:23:22.638806 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:23:22.640768 coreos-metadata[1432]: Apr 13 20:23:22.640 INFO Fetch successful
Apr 13 20:23:22.640978 coreos-metadata[1432]: Apr 13 20:23:22.640 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Apr 13 20:23:22.642587 coreos-metadata[1432]: Apr 13 20:23:22.642 INFO Fetch successful
Apr 13 20:23:22.659860 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found loop4
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found loop5
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found loop6
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found loop7
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda1
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda2
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda3
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found usr
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda4
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda6
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda7
Apr 13 20:23:22.670619 extend-filesystems[1437]: Found sda9
Apr 13 20:23:22.670619 extend-filesystems[1437]: Checking size of /dev/sda9
Apr 13 20:23:22.839836 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: ----------------------------------------------------
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: corporation. Support and training for ntp-4 are
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: available at https://www.nwtime.org/support
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: ----------------------------------------------------
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: proto: precision = 0.075 usec (-24)
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: basedate set to 2026-04-01
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen normally on 3 eth0 10.128.0.108:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen normally on 4 lo [::1]:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:6c%2]:123
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:23:22.839992 ntpd[1439]: 13 Apr 20:23:22 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:23:22.760328 ntpd[1439]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:23:22.859455 extend-filesystems[1437]: Resized partition /dev/sda9
Apr 13 20:23:22.681857 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:23:22.760364 ntpd[1439]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:23:22.868417 extend-filesystems[1456]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:23:22.956403 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Apr 13 20:23:22.956475 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1338)
Apr 13 20:23:22.712483 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Apr 13 20:23:22.760381 ntpd[1439]: ----------------------------------------------------
Apr 13 20:23:22.713476 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:23:22.760408 ntpd[1439]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:23:22.723947 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:23:22.760426 ntpd[1439]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:23:22.963616 update_engine[1458]: I20260413 20:23:22.864984 1458 main.cc:92] Flatcar Update Engine starting
Apr 13 20:23:22.963616 update_engine[1458]: I20260413 20:23:22.896930 1458 update_check_scheduler.cc:74] Next update check in 2m57s
Apr 13 20:23:22.751842 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:23:22.760441 ntpd[1439]: corporation. Support and training for ntp-4 are
Apr 13 20:23:22.972510 extend-filesystems[1456]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:23:22.972510 extend-filesystems[1456]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 20:23:22.972510 extend-filesystems[1456]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Apr 13 20:23:23.023817 jq[1459]: true
Apr 13 20:23:22.762235 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:23:22.760459 ntpd[1439]: available at https://www.nwtime.org/support
Apr 13 20:23:23.024973 extend-filesystems[1437]: Resized filesystem in /dev/sda9
Apr 13 20:23:22.762606 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:23:22.760474 ntpd[1439]: ----------------------------------------------------
Apr 13 20:23:22.763201 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:23:22.771708 ntpd[1439]: proto: precision = 0.075 usec (-24)
Apr 13 20:23:22.763474 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:23:22.775640 ntpd[1439]: basedate set to 2026-04-01
Apr 13 20:23:22.800257 systemd-networkd[1370]: eth0: Gained IPv6LL
Apr 13 20:23:22.775674 ntpd[1439]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:23:22.801426 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:23:22.799979 ntpd[1439]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:23:22.802093 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:23:22.800060 ntpd[1439]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:23:22.814915 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:23:22.803185 ntpd[1439]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:23:23.109275 init.sh[1481]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Apr 13 20:23:23.109275 init.sh[1481]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Apr 13 20:23:23.109275 init.sh[1481]: + /usr/bin/google_instance_setup
Apr 13 20:23:22.862155 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:23:22.803257 ntpd[1439]: Listen normally on 3 eth0 10.128.0.108:123
Apr 13 20:23:22.936749 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:23:22.803324 ntpd[1439]: Listen normally on 4 lo [::1]:123
Apr 13 20:23:22.946850 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:23:22.803397 ntpd[1439]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:6c%2]:123
Apr 13 20:23:22.971731 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:23:22.803457 ntpd[1439]: Listening on routing socket on fd #22 for interface updates
Apr 13 20:23:22.973228 (ntainerd)[1471]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:23:22.813379 dbus-daemon[1433]: [system] SELinux support is enabled
Apr 13 20:23:22.990871 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:23:22.823197 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:23:23.023859 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Apr 13 20:23:22.823243 ntpd[1439]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:23:23.032945 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:23:22.837750 dbus-daemon[1433]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1370 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:23:23.032997 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:23:22.941618 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 20:23:23.044798 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:23:23.044835 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:23:23.057874 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:23:23.058687 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:23:23.099059 systemd-logind[1451]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:23:23.099096 systemd-logind[1451]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 13 20:23:23.099133 systemd-logind[1451]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:23:23.100801 systemd-logind[1451]: New seat seat0.
Apr 13 20:23:23.102919 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:23:23.169242 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:23:23.189609 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:23:23.191129 tar[1464]: linux-amd64/LICENSE
Apr 13 20:23:23.191129 tar[1464]: linux-amd64/helm
Apr 13 20:23:23.208298 jq[1482]: true
Apr 13 20:23:23.246508 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:23:23.259076 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:23:23.276169 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:23:23.335181 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:23:23.475660 bash[1516]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:23:23.479264 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 20:23:23.508440 systemd[1]: Starting sshkeys.service... Apr 13 20:23:23.597711 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 20:23:23.620867 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 20:23:23.781629 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 20:23:23.781904 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 20:23:23.782846 dbus-daemon[1433]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1498 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 20:23:23.806182 systemd[1]: Starting polkit.service - Authorization Manager... 
Apr 13 20:23:23.842195 coreos-metadata[1519]: Apr 13 20:23:23.841 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 13 20:23:23.843667 coreos-metadata[1519]: Apr 13 20:23:23.843 INFO Fetch failed with 404: resource not found Apr 13 20:23:23.843667 coreos-metadata[1519]: Apr 13 20:23:23.843 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 13 20:23:23.855527 coreos-metadata[1519]: Apr 13 20:23:23.852 INFO Fetch successful Apr 13 20:23:23.855527 coreos-metadata[1519]: Apr 13 20:23:23.852 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 13 20:23:23.856061 coreos-metadata[1519]: Apr 13 20:23:23.855 INFO Fetch failed with 404: resource not found Apr 13 20:23:23.856061 coreos-metadata[1519]: Apr 13 20:23:23.855 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 13 20:23:23.862603 coreos-metadata[1519]: Apr 13 20:23:23.860 INFO Fetch failed with 404: resource not found Apr 13 20:23:23.862603 coreos-metadata[1519]: Apr 13 20:23:23.860 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 13 20:23:23.865126 coreos-metadata[1519]: Apr 13 20:23:23.863 INFO Fetch successful Apr 13 20:23:23.877979 unknown[1519]: wrote ssh authorized keys file for user: core Apr 13 20:23:23.979676 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:23:23.981689 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:23:23.997839 systemd[1]: Finished sshkeys.service. 
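The coreos-metadata trace above shows the lookup order for SSH keys on GCE: instance-level attributes are tried before project-level ones, and the legacy `sshKeys` attribute before `ssh-keys`, with 404s tolerated at each step. A minimal sketch of that fallback order, with a stubbed fetch function standing in for the real HTTP calls to `169.254.169.254` (the stub and all key strings are illustrative, not the agent's actual code):

```python
# Attribute paths in the order the log shows them being tried.
ATTRIBUTE_ORDER = [
    "instance/attributes/sshKeys",   # legacy name; 404 in the log
    "instance/attributes/ssh-keys",  # fetched successfully
    "project/attributes/sshKeys",    # legacy name; 404 in the log
    "project/attributes/ssh-keys",   # fetched successfully
]

def collect_keys(fetch):
    """Gather keys from every attribute that resolves; a 404 (None) is skipped."""
    keys = []
    for path in ATTRIBUTE_ORDER:
        value = fetch(path)
        if value is not None:
            keys.extend(value.splitlines())
    return keys

# Stubbed metadata server mirroring the 404/200 pattern in the trace above.
stub = {
    "instance/attributes/ssh-keys": "core:ssh-ed25519 AAAA... user1",
    "project/attributes/ssh-keys": "core:ssh-ed25519 BBBB... user2",
}.get

print(collect_keys(stub))
```

Both successful fetches contribute keys, which matches the single combined `authorized_keys` write for user `core` in the log.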
Apr 13 20:23:24.054338 sshd_keygen[1470]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:23:24.067180 polkitd[1526]: Started polkitd version 121 Apr 13 20:23:24.090461 polkitd[1526]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 20:23:24.091619 polkitd[1526]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 20:23:24.095033 polkitd[1526]: Finished loading, compiling and executing 2 rules Apr 13 20:23:24.098074 dbus-daemon[1433]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 20:23:24.098462 systemd[1]: Started polkit.service - Authorization Manager. Apr 13 20:23:24.099641 polkitd[1526]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 20:23:24.128424 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 20:23:24.156877 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:23:24.164217 systemd-resolved[1321]: System hostname changed to 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal'. Apr 13 20:23:24.165385 systemd-hostnamed[1498]: Hostname set to (transient) Apr 13 20:23:24.183410 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:23:24.206033 systemd[1]: Started sshd@0-10.128.0.108:22-20.229.252.112:58472.service - OpenSSH per-connection server daemon (20.229.252.112:58472). Apr 13 20:23:24.295257 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 20:23:24.295931 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:23:24.314489 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:23:24.347911 containerd[1471]: time="2026-04-13T20:23:24.345907812Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 20:23:24.389236 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Apr 13 20:23:24.411492 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:23:24.428276 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:23:24.439548 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:23:24.493596 containerd[1471]: time="2026-04-13T20:23:24.491508899Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496294753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496367071Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496420710Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496723358Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496768247Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496882524Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.496906816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.497244089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.497280165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.497309011Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:23:24.498266 containerd[1471]: time="2026-04-13T20:23:24.497332155Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.499137 containerd[1471]: time="2026-04-13T20:23:24.497472818Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.499137 containerd[1471]: time="2026-04-13T20:23:24.498309888Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:23:24.499137 containerd[1471]: time="2026-04-13T20:23:24.498544208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:23:24.499137 containerd[1471]: time="2026-04-13T20:23:24.498572248Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Apr 13 20:23:24.500718 containerd[1471]: time="2026-04-13T20:23:24.500674520Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 20:23:24.500831 containerd[1471]: time="2026-04-13T20:23:24.500798351Z" level=info msg="metadata content store policy set" policy=shared Apr 13 20:23:24.515652 containerd[1471]: time="2026-04-13T20:23:24.513944974Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 20:23:24.515823 containerd[1471]: time="2026-04-13T20:23:24.515720053Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 20:23:24.515883 containerd[1471]: time="2026-04-13T20:23:24.515817779Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:23:24.515883 containerd[1471]: time="2026-04-13T20:23:24.515856973Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:23:24.515974 containerd[1471]: time="2026-04-13T20:23:24.515909384Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:23:24.516360 containerd[1471]: time="2026-04-13T20:23:24.516174957Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 20:23:24.518192 containerd[1471]: time="2026-04-13T20:23:24.518086419Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 20:23:24.518403 containerd[1471]: time="2026-04-13T20:23:24.518357570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:23:24.518403 containerd[1471]: time="2026-04-13T20:23:24.518389298Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.518414220Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.518444014Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.518473727Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.518501923Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.518536472Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519547577Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519619485Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519649759Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519679303Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519723710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519753483Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519780677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519809282Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.520607 containerd[1471]: time="2026-04-13T20:23:24.519843913Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.519872655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.519931677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.519962018Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.519995368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520044415Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520073133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520100802Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520127759Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520178330Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520222392Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520247816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.521227 containerd[1471]: time="2026-04-13T20:23:24.520271853Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521279845Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521433306Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521457178Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521480543Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521499902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521524506Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521543351Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:23:24.522489 containerd[1471]: time="2026-04-13T20:23:24.521592614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:23:24.523545 containerd[1471]: time="2026-04-13T20:23:24.522112018Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:23:24.523545 containerd[1471]: time="2026-04-13T20:23:24.522221959Z" level=info msg="Connect containerd service" Apr 13 20:23:24.525874 containerd[1471]: time="2026-04-13T20:23:24.523688872Z" level=info msg="using legacy CRI server" Apr 13 20:23:24.525874 containerd[1471]: time="2026-04-13T20:23:24.523716229Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:23:24.525874 containerd[1471]: time="2026-04-13T20:23:24.523912580Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:23:24.539587 containerd[1471]: time="2026-04-13T20:23:24.536528388Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541219349Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541340374Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541409967Z" level=info msg="Start subscribing containerd event" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541480396Z" level=info msg="Start recovering state" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541639501Z" level=info msg="Start event monitor" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541669628Z" level=info msg="Start snapshots syncer" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541688899Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541713828Z" level=info msg="Start streaming server" Apr 13 20:23:24.542282 containerd[1471]: time="2026-04-13T20:23:24.541822913Z" level=info msg="containerd successfully booted in 0.198215s" Apr 13 20:23:24.542843 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:23:24.946806 instance-setup[1484]: INFO Running google_set_multiqueue. Apr 13 20:23:24.977917 instance-setup[1484]: INFO Set channels for eth0 to 2. Apr 13 20:23:24.986124 instance-setup[1484]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Apr 13 20:23:24.989354 instance-setup[1484]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 13 20:23:24.990294 instance-setup[1484]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 13 20:23:24.994223 instance-setup[1484]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 13 20:23:24.994524 instance-setup[1484]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. 
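The `failed to load cni during init` error above is expected at this point in boot: containerd's CRI plugin found nothing in `/etc/cni/net.d`, which on a Kubernetes node is normally populated later by a CNI plugin installer. For reference, a minimal conflist of the shape containerd looks for there (the network name, bridge name, and subnet below are illustrative assumptions, not values from this node):

```json
{
  "cniVersion": "0.4.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
```

Once a file like this lands in `/etc/cni/net.d`, the "cni network conf syncer" started a few entries later picks it up without a containerd restart.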
Apr 13 20:23:24.997242 instance-setup[1484]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 13 20:23:24.999660 instance-setup[1484]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Apr 13 20:23:25.004959 instance-setup[1484]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 13 20:23:25.012975 tar[1464]: linux-amd64/README.md Apr 13 20:23:25.029445 instance-setup[1484]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:23:25.045138 instance-setup[1484]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:23:25.046721 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 20:23:25.049124 sshd[1553]: Accepted publickey for core from 20.229.252.112 port 58472 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:23:25.050718 instance-setup[1484]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 13 20:23:25.050787 instance-setup[1484]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 13 20:23:25.052407 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:23:25.082150 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:23:25.089687 init.sh[1481]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 13 20:23:25.105133 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:23:25.121340 systemd-logind[1451]: New session 1 of user core. Apr 13 20:23:25.157696 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 20:23:25.186367 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Apr 13 20:23:25.227673 (systemd)[1601]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:23:25.379779 startup-script[1598]: INFO Starting startup scripts. Apr 13 20:23:25.389489 startup-script[1598]: INFO No startup scripts found in metadata. Apr 13 20:23:25.389591 startup-script[1598]: INFO Finished running startup scripts. Apr 13 20:23:25.443668 init.sh[1481]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 13 20:23:25.443668 init.sh[1481]: + daemon_pids=() Apr 13 20:23:25.443668 init.sh[1481]: + for d in accounts clock_skew network Apr 13 20:23:25.443668 init.sh[1481]: + daemon_pids+=($!) Apr 13 20:23:25.443668 init.sh[1481]: + for d in accounts clock_skew network Apr 13 20:23:25.443668 init.sh[1481]: + daemon_pids+=($!) Apr 13 20:23:25.443668 init.sh[1481]: + for d in accounts clock_skew network Apr 13 20:23:25.443668 init.sh[1481]: + daemon_pids+=($!) Apr 13 20:23:25.444169 init.sh[1481]: + NOTIFY_SOCKET=/run/systemd/notify Apr 13 20:23:25.444169 init.sh[1481]: + /usr/bin/systemd-notify --ready Apr 13 20:23:25.449887 init.sh[1609]: + /usr/bin/google_accounts_daemon Apr 13 20:23:25.450418 init.sh[1610]: + /usr/bin/google_clock_skew_daemon Apr 13 20:23:25.452885 init.sh[1611]: + /usr/bin/google_network_daemon Apr 13 20:23:25.467256 systemd[1]: Started oem-gce.service - GCE Linux Agent. Apr 13 20:23:25.485848 init.sh[1481]: + wait -n 1609 1610 1611 Apr 13 20:23:25.562758 systemd[1601]: Queued start job for default target default.target. Apr 13 20:23:25.571064 systemd[1601]: Created slice app.slice - User Application Slice. Apr 13 20:23:25.571127 systemd[1601]: Reached target paths.target - Paths. Apr 13 20:23:25.571156 systemd[1601]: Reached target timers.target - Timers. Apr 13 20:23:25.582777 systemd[1601]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:23:25.622942 systemd[1601]: Listening on dbus.socket - D-Bus User Message Bus Socket. 
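The `init.sh` trace above (trap, `daemon_pids+=($!)`, `systemd-notify --ready`, `wait -n`) is a common bash supervision pattern: start several daemons in the background, record their PIDs, forward SIGTERM to all of them, and return as soon as the first one exits. A self-contained sketch of that pattern, using short `sleep` commands as stand-in daemons (an assumption; the real script launches the `google_*_daemon` binaries):

```shell
#!/usr/bin/env bash
# Supervision sketch: launch background "daemons", forward SIGTERM,
# and wake up when the first one exits (bash >= 4.3 for `wait -n`).
set -u
stopping=0
trap 'stopping=1; kill "${daemon_pids[@]}" 2>/dev/null || :' TERM

daemon_pids=()
for d in 0.1 0.2 0.3; do
  sleep "$d" &            # stand-in for /usr/bin/google_${d}_daemon
  daemon_pids+=($!)
done

wait -n                   # returns when the first background job exits
echo "first daemon exited"
```

In the real script the `wait -n 1609 1610 1611` call lets systemd treat the death of any one agent daemon as a failure of the whole `oem-gce.service` unit.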
Apr 13 20:23:25.623190 systemd[1601]: Reached target sockets.target - Sockets. Apr 13 20:23:25.623241 systemd[1601]: Reached target basic.target - Basic System. Apr 13 20:23:25.623333 systemd[1601]: Reached target default.target - Main User Target. Apr 13 20:23:25.623394 systemd[1601]: Startup finished in 376ms. Apr 13 20:23:25.624321 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:23:25.642212 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:23:26.078429 google-clock-skew[1610]: INFO Starting Google Clock Skew daemon. Apr 13 20:23:26.112987 google-clock-skew[1610]: INFO Clock drift token has changed: 0. Apr 13 20:23:26.179962 systemd[1]: Started sshd@1-10.128.0.108:22-20.229.252.112:41848.service - OpenSSH per-connection server daemon (20.229.252.112:41848). Apr 13 20:23:26.197837 google-networking[1611]: INFO Starting Google Networking daemon. Apr 13 20:23:26.207131 groupadd[1623]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 13 20:23:26.215030 groupadd[1623]: group added to /etc/gshadow: name=google-sudoers Apr 13 20:23:26.000322 systemd-resolved[1321]: Clock change detected. Flushing caches. Apr 13 20:23:26.025958 systemd-journald[1111]: Time jumped backwards, rotating. Apr 13 20:23:26.000727 google-clock-skew[1610]: INFO Synced system time with hardware clock. Apr 13 20:23:26.063949 groupadd[1623]: new group: name=google-sudoers, GID=1000 Apr 13 20:23:26.112028 google-accounts[1609]: INFO Starting Google Accounts daemon. Apr 13 20:23:26.124085 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:23:26.131007 google-accounts[1609]: WARNING OS Login not installed. Apr 13 20:23:26.133619 google-accounts[1609]: INFO Creating a new user account for 0. Apr 13 20:23:26.139949 systemd[1]: Reached target multi-user.target - Multi-User System. 
Apr 13 20:23:26.140863 init.sh[1642]: useradd: invalid user name '0': use --badname to ignore Apr 13 20:23:26.141300 google-accounts[1609]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Apr 13 20:23:26.150827 systemd[1]: Startup finished in 1.274s (kernel) + 10.897s (initrd) + 10.758s (userspace) = 22.930s. Apr 13 20:23:26.158913 (kubelet)[1640]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:23:26.647821 sshd[1626]: Accepted publickey for core from 20.229.252.112 port 41848 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:23:26.650880 sshd[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:23:26.661536 systemd-logind[1451]: New session 2 of user core. Apr 13 20:23:26.669151 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:23:27.137365 kubelet[1640]: E0413 20:23:27.136460 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:23:27.137701 sshd[1626]: pam_unix(sshd:session): session closed for user core Apr 13 20:23:27.143792 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:23:27.144355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:23:27.145346 systemd[1]: kubelet.service: Consumed 1.310s CPU time. Apr 13 20:23:27.150052 systemd[1]: sshd@1-10.128.0.108:22-20.229.252.112:41848.service: Deactivated successfully. Apr 13 20:23:27.155475 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:23:27.158485 systemd-logind[1451]: Session 2 logged out. 
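The kubelet failure above (`open /var/lib/kubelet/config.yaml: no such file or directory`) is the normal state of a node before `kubeadm init` or `kubeadm join` has run: kubeadm is what writes that file, and the unit simply restarts until it appears. For orientation, a file of that kind is a `KubeletConfiguration` object; the fields below beyond the required header are illustrative, not recovered from this node:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Illustrative settings; kubeadm generates the real file during init/join.
cgroupDriver: systemd
authentication:
  anonymous:
    enabled: false
```

This also explains the `Consumed 1.310s CPU time` / `status=1/FAILURE` pair logged a moment later: the unit exits immediately rather than serving anything.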
Waiting for processes to exit.
Apr 13 20:23:27.161828 systemd-logind[1451]: Removed session 2.
Apr 13 20:23:27.269383 systemd[1]: Started sshd@2-10.128.0.108:22-20.229.252.112:41860.service - OpenSSH per-connection server daemon (20.229.252.112:41860).
Apr 13 20:23:27.993219 sshd[1660]: Accepted publickey for core from 20.229.252.112 port 41860 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:23:27.996699 sshd[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:23:28.004148 systemd-logind[1451]: New session 3 of user core.
Apr 13 20:23:28.014180 systemd[1]: Started session-3.scope - Session 3 of User core.
Apr 13 20:23:28.486230 sshd[1660]: pam_unix(sshd:session): session closed for user core
Apr 13 20:23:28.493600 systemd[1]: sshd@2-10.128.0.108:22-20.229.252.112:41860.service: Deactivated successfully.
Apr 13 20:23:28.497478 systemd[1]: session-3.scope: Deactivated successfully.
Apr 13 20:23:28.500155 systemd-logind[1451]: Session 3 logged out. Waiting for processes to exit.
Apr 13 20:23:28.502276 systemd-logind[1451]: Removed session 3.
Apr 13 20:23:28.618332 systemd[1]: Started sshd@3-10.128.0.108:22-20.229.252.112:41874.service - OpenSSH per-connection server daemon (20.229.252.112:41874).
Apr 13 20:23:29.344532 sshd[1667]: Accepted publickey for core from 20.229.252.112 port 41874 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:23:29.346697 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:23:29.354627 systemd-logind[1451]: New session 4 of user core.
Apr 13 20:23:29.362142 systemd[1]: Started session-4.scope - Session 4 of User core.
Apr 13 20:23:29.848451 sshd[1667]: pam_unix(sshd:session): session closed for user core
Apr 13 20:23:29.854506 systemd[1]: sshd@3-10.128.0.108:22-20.229.252.112:41874.service: Deactivated successfully.
Apr 13 20:23:29.857233 systemd[1]: session-4.scope: Deactivated successfully.
Apr 13 20:23:29.858363 systemd-logind[1451]: Session 4 logged out. Waiting for processes to exit.
Apr 13 20:23:29.860119 systemd-logind[1451]: Removed session 4.
Apr 13 20:23:29.975228 systemd[1]: Started sshd@4-10.128.0.108:22-20.229.252.112:41886.service - OpenSSH per-connection server daemon (20.229.252.112:41886).
Apr 13 20:23:30.689845 sshd[1674]: Accepted publickey for core from 20.229.252.112 port 41886 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:23:30.691363 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:23:30.697600 systemd-logind[1451]: New session 5 of user core.
Apr 13 20:23:30.709089 systemd[1]: Started session-5.scope - Session 5 of User core.
Apr 13 20:23:31.098139 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Apr 13 20:23:31.098819 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:23:31.117527 sudo[1677]: pam_unix(sudo:session): session closed for user root
Apr 13 20:23:31.231472 sshd[1674]: pam_unix(sshd:session): session closed for user core
Apr 13 20:23:31.238124 systemd[1]: sshd@4-10.128.0.108:22-20.229.252.112:41886.service: Deactivated successfully.
Apr 13 20:23:31.241046 systemd[1]: session-5.scope: Deactivated successfully.
Apr 13 20:23:31.243560 systemd-logind[1451]: Session 5 logged out. Waiting for processes to exit.
Apr 13 20:23:31.245385 systemd-logind[1451]: Removed session 5.
Apr 13 20:23:31.358330 systemd[1]: Started sshd@5-10.128.0.108:22-20.229.252.112:41892.service - OpenSSH per-connection server daemon (20.229.252.112:41892).
Apr 13 20:23:32.073847 sshd[1682]: Accepted publickey for core from 20.229.252.112 port 41892 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:23:32.076070 sshd[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:23:32.084134 systemd-logind[1451]: New session 6 of user core.
Apr 13 20:23:32.091174 systemd[1]: Started session-6.scope - Session 6 of User core.
Apr 13 20:23:32.466425 sudo[1686]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:23:32.467473 sudo[1686]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:23:32.473403 sudo[1686]: pam_unix(sudo:session): session closed for user root
Apr 13 20:23:32.488607 sudo[1685]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:23:32.489252 sudo[1685]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:23:32.508196 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:23:32.513002 auditctl[1689]: No rules
Apr 13 20:23:32.513695 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:23:32.514066 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:23:32.521491 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:23:32.571917 augenrules[1707]: No rules
Apr 13 20:23:32.574190 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:23:32.576787 sudo[1685]: pam_unix(sudo:session): session closed for user root
Apr 13 20:23:32.690487 sshd[1682]: pam_unix(sshd:session): session closed for user core
Apr 13 20:23:32.695950 systemd[1]: sshd@5-10.128.0.108:22-20.229.252.112:41892.service: Deactivated successfully.
Apr 13 20:23:32.698675 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:23:32.701203 systemd-logind[1451]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:23:32.703194 systemd-logind[1451]: Removed session 6.
Apr 13 20:23:32.817790 systemd[1]: Started sshd@6-10.128.0.108:22-20.229.252.112:41904.service - OpenSSH per-connection server daemon (20.229.252.112:41904).
Apr 13 20:23:33.536368 sshd[1715]: Accepted publickey for core from 20.229.252.112 port 41904 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:23:33.538724 sshd[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:23:33.547456 systemd-logind[1451]: New session 7 of user core.
Apr 13 20:23:33.558177 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:23:33.931000 sudo[1718]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:23:33.931673 sudo[1718]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:23:34.447315 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:23:34.447489 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:23:34.931172 dockerd[1734]: time="2026-04-13T20:23:34.930973310Z" level=info msg="Starting up"
Apr 13 20:23:35.063911 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2636554242-merged.mount: Deactivated successfully.
Apr 13 20:23:35.099056 dockerd[1734]: time="2026-04-13T20:23:35.098942495Z" level=info msg="Loading containers: start."
Apr 13 20:23:35.287789 kernel: Initializing XFRM netlink socket
Apr 13 20:23:35.435493 systemd-networkd[1370]: docker0: Link UP
Apr 13 20:23:35.458478 dockerd[1734]: time="2026-04-13T20:23:35.458403850Z" level=info msg="Loading containers: done."
Apr 13 20:23:35.486381 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2065699212-merged.mount: Deactivated successfully.
Apr 13 20:23:35.488500 dockerd[1734]: time="2026-04-13T20:23:35.488424546Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:23:35.488637 dockerd[1734]: time="2026-04-13T20:23:35.488597284Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:23:35.488863 dockerd[1734]: time="2026-04-13T20:23:35.488813184Z" level=info msg="Daemon has completed initialization"
Apr 13 20:23:35.536725 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:23:35.536976 dockerd[1734]: time="2026-04-13T20:23:35.536818809Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:23:36.512450 containerd[1471]: time="2026-04-13T20:23:36.512368646Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\""
Apr 13 20:23:37.210809 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:23:37.219122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:23:37.244409 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3682495126.mount: Deactivated successfully.
Apr 13 20:23:37.667207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:23:37.679840 (kubelet)[1895]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:23:37.806708 kubelet[1895]: E0413 20:23:37.805977 1895 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:23:37.815655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:23:37.816507 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:23:39.365320 containerd[1471]: time="2026-04-13T20:23:39.365222344Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:39.372772 containerd[1471]: time="2026-04-13T20:23:39.371147666Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.6: active requests=0, bytes read=26947748"
Apr 13 20:23:39.375782 containerd[1471]: time="2026-04-13T20:23:39.375705141Z" level=info msg="ImageCreate event name:\"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:39.382892 containerd[1471]: time="2026-04-13T20:23:39.382805306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:39.384426 containerd[1471]: time="2026-04-13T20:23:39.384365550Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.6\" with image id \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:698dcff68850a9b3a276ae22d304679828cf8b87e9c5e3a73304f0ea03f91570\", size \"26944341\" in 2.871925146s"
Apr 13 20:23:39.384564 containerd[1471]: time="2026-04-13T20:23:39.384436005Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.6\" returns image reference \"sha256:ca3b750bba3873cd164ef1e32130ad132f425a828d81ce137baf0dc62b638d3d\""
Apr 13 20:23:39.385532 containerd[1471]: time="2026-04-13T20:23:39.385495639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\""
Apr 13 20:23:41.179906 containerd[1471]: time="2026-04-13T20:23:41.179794211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:41.182017 containerd[1471]: time="2026-04-13T20:23:41.181933565Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.6: active requests=0, bytes read=21165818"
Apr 13 20:23:41.183390 containerd[1471]: time="2026-04-13T20:23:41.183305447Z" level=info msg="ImageCreate event name:\"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:41.188198 containerd[1471]: time="2026-04-13T20:23:41.188138463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:41.190328 containerd[1471]: time="2026-04-13T20:23:41.190112119Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.6\" with image id \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ba0a07668e2cfac6b1cac60e759411962dba0e40bdd1585242c4358d840095d0\", size \"22695997\" in 1.804552951s"
Apr 13 20:23:41.190328 containerd[1471]: time="2026-04-13T20:23:41.190174398Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.6\" returns image reference \"sha256:062810119a58956a36eff21ecb9999104025d0131ee628f8624a43f7149eb318\""
Apr 13 20:23:41.191360 containerd[1471]: time="2026-04-13T20:23:41.191083269Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\""
Apr 13 20:23:47.965861 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:23:47.977805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:23:48.348581 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:23:48.364524 (kubelet)[1958]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:23:48.423916 kubelet[1958]: E0413 20:23:48.423815 1958 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:23:48.428624 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:23:48.429011 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:23:53.918583 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:23:53.963257 containerd[1471]: time="2026-04-13T20:23:53.963168983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:53.965215 containerd[1471]: time="2026-04-13T20:23:53.965118940Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.6: active requests=0, bytes read=15729853"
Apr 13 20:23:53.967490 containerd[1471]: time="2026-04-13T20:23:53.967039085Z" level=info msg="ImageCreate event name:\"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:53.972072 containerd[1471]: time="2026-04-13T20:23:53.971969773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:53.974399 containerd[1471]: time="2026-04-13T20:23:53.974120561Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.6\" with image id \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:5034a9ecf42eb967e5c9f6faace4ec20747a8e16a170ebdaf2eb31878b2da74a\", size \"17260050\" in 12.78298212s"
Apr 13 20:23:53.974399 containerd[1471]: time="2026-04-13T20:23:53.974181745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.6\" returns image reference \"sha256:c598f9d55481b2b69a3bdbae358c0d6f51a05344edf4c9ed7d4a2c1e248823b3\""
Apr 13 20:23:53.974937 containerd[1471]: time="2026-04-13T20:23:53.974866272Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\""
Apr 13 20:23:55.430499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111601338.mount: Deactivated successfully.
Apr 13 20:23:55.964624 containerd[1471]: time="2026-04-13T20:23:55.964267562Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.6: active requests=0, bytes read=25861780"
Apr 13 20:23:55.965497 containerd[1471]: time="2026-04-13T20:23:55.965012982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:55.967070 containerd[1471]: time="2026-04-13T20:23:55.966314196Z" level=info msg="ImageCreate event name:\"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:55.971184 containerd[1471]: time="2026-04-13T20:23:55.971085016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:55.972981 containerd[1471]: time="2026-04-13T20:23:55.972254721Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.6\" with image id \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\", repo tag \"registry.k8s.io/kube-proxy:v1.34.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:d0921102f744d15133bc3a1cb54d8cbf323e00f2f73ea5a79c763202c6db18aa\", size \"25860793\" in 1.997317515s"
Apr 13 20:23:55.972981 containerd[1471]: time="2026-04-13T20:23:55.972312538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.6\" returns image reference \"sha256:6aec52d4adc8d0a6a397bdec1614d94e59c8e1720b80d72933691489106ece1e\""
Apr 13 20:23:55.973504 containerd[1471]: time="2026-04-13T20:23:55.973469040Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Apr 13 20:23:56.546894 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3432089353.mount: Deactivated successfully.
Apr 13 20:23:58.034281 containerd[1471]: time="2026-04-13T20:23:58.034183859Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.036403 containerd[1471]: time="2026-04-13T20:23:58.036323071Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=22388013"
Apr 13 20:23:58.038797 containerd[1471]: time="2026-04-13T20:23:58.038382631Z" level=info msg="ImageCreate event name:\"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.046770 containerd[1471]: time="2026-04-13T20:23:58.046317207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.048275 containerd[1471]: time="2026-04-13T20:23:58.048033994Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"22384805\" in 2.074402938s"
Apr 13 20:23:58.048275 containerd[1471]: time="2026-04-13T20:23:58.048091886Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969\""
Apr 13 20:23:58.050099 containerd[1471]: time="2026-04-13T20:23:58.049994237Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 20:23:58.465726 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Apr 13 20:23:58.474601 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:23:58.857251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1063079316.mount: Deactivated successfully.
Apr 13 20:23:58.866727 containerd[1471]: time="2026-04-13T20:23:58.866641985Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.869189 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:23:58.872087 containerd[1471]: time="2026-04-13T20:23:58.871993746Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321224"
Apr 13 20:23:58.873458 containerd[1471]: time="2026-04-13T20:23:58.873404541Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.878511 containerd[1471]: time="2026-04-13T20:23:58.878426883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:23:58.881123 containerd[1471]: time="2026-04-13T20:23:58.879806571Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 829.729912ms"
Apr 13 20:23:58.881123 containerd[1471]: time="2026-04-13T20:23:58.879864477Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 20:23:58.879536 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:23:58.882370 containerd[1471]: time="2026-04-13T20:23:58.882293260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\""
Apr 13 20:23:58.962656 kubelet[2044]: E0413 20:23:58.962589 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:23:58.967364 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:23:58.967694 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:23:59.455339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount23662488.mount: Deactivated successfully.
Apr 13 20:24:00.907256 containerd[1471]: time="2026-04-13T20:24:00.907147309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.5-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:00.909734 containerd[1471]: time="2026-04-13T20:24:00.909633154Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.5-0: active requests=0, bytes read=22874237"
Apr 13 20:24:00.912850 containerd[1471]: time="2026-04-13T20:24:00.911185515Z" level=info msg="ImageCreate event name:\"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:00.918427 containerd[1471]: time="2026-04-13T20:24:00.918357582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:00.920625 containerd[1471]: time="2026-04-13T20:24:00.920393600Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.5-0\" with image id \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\", repo tag \"registry.k8s.io/etcd:3.6.5-0\", repo digest \"registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534\", size \"22871747\" in 2.038048106s"
Apr 13 20:24:00.920625 containerd[1471]: time="2026-04-13T20:24:00.920456982Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.5-0\" returns image reference \"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1\""
Apr 13 20:24:04.542939 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:24:04.551536 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:24:04.615936 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit session-7.scope)...
Apr 13 20:24:04.615964 systemd[1]: Reloading...
Apr 13 20:24:04.817828 zram_generator::config[2182]: No configuration found.
Apr 13 20:24:05.023975 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:24:05.135794 systemd[1]: Reloading finished in 518 ms.
Apr 13 20:24:05.206940 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:24:05.207118 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:24:05.207563 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:24:05.215326 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:24:05.595041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:24:05.607514 (kubelet)[2232]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:24:05.677389 kubelet[2232]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Apr 13 20:24:05.677854 kubelet[2232]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:24:05.678019 kubelet[2232]: I0413 20:24:05.677948 2232 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Apr 13 20:24:06.369040 kubelet[2232]: I0413 20:24:06.368974 2232 server.go:529] "Kubelet version" kubeletVersion="v1.34.4"
Apr 13 20:24:06.369040 kubelet[2232]: I0413 20:24:06.369025 2232 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:24:06.369307 kubelet[2232]: I0413 20:24:06.369069 2232 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:24:06.369307 kubelet[2232]: I0413 20:24:06.369086 2232 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:24:06.369513 kubelet[2232]: I0413 20:24:06.369468 2232 server.go:956] "Client rotation is on, will bootstrap in background"
Apr 13 20:24:06.380165 kubelet[2232]: I0413 20:24:06.380065 2232 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:24:06.380595 kubelet[2232]: E0413 20:24:06.380541 2232 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:24:06.391772 kubelet[2232]: E0413 20:24:06.391706 2232 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:24:06.391919 kubelet[2232]: I0413 20:24:06.391799 2232 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:24:06.395311 kubelet[2232]: I0413 20:24:06.395249 2232 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 20:24:06.396982 kubelet[2232]: I0413 20:24:06.396871 2232 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:24:06.397267 kubelet[2232]: I0413 20:24:06.396979 2232 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:24:06.397267 kubelet[2232]: I0413 20:24:06.397266 2232 topology_manager.go:138] "Creating topology manager with none policy"
Apr 13 20:24:06.397530 kubelet[2232]: I0413 20:24:06.397284 2232 container_manager_linux.go:306] "Creating device plugin manager"
Apr 13 20:24:06.397530 kubelet[2232]: I0413 20:24:06.397438 2232 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:24:06.400506 kubelet[2232]: I0413 20:24:06.400434 2232 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:24:06.400989 kubelet[2232]: I0413 20:24:06.400936 2232 kubelet.go:475] "Attempting to sync node with API server"
Apr 13 20:24:06.401102 kubelet[2232]: I0413 20:24:06.401027 2232 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:24:06.401102 kubelet[2232]: I0413 20:24:06.401073 2232 kubelet.go:387] "Adding apiserver pod source"
Apr 13 20:24:06.401224 kubelet[2232]: I0413 20:24:06.401113 2232 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:24:06.406352 kubelet[2232]: E0413 20:24:06.406128 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Apr 13 20:24:06.406352 kubelet[2232]: E0413 20:24:06.406350 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Apr 13 20:24:06.406615 kubelet[2232]: I0413 20:24:06.406507 2232 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:24:06.408074 kubelet[2232]: I0413 20:24:06.407508 2232 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:24:06.408074 kubelet[2232]: I0413 20:24:06.407579 2232 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:24:06.408074 kubelet[2232]: W0413 20:24:06.407676 2232 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:24:06.427569 kubelet[2232]: I0413 20:24:06.427334 2232 server.go:1262] "Started kubelet"
Apr 13 20:24:06.429834 kubelet[2232]: I0413 20:24:06.429768 2232 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:24:06.435790 kubelet[2232]: I0413 20:24:06.433262 2232 server.go:310] "Adding debug handlers to kubelet server"
Apr 13 20:24:06.435790 kubelet[2232]: I0413 20:24:06.434300 2232 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Apr 13 20:24:06.437555 kubelet[2232]: I0413 20:24:06.437511 2232 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:24:06.440648 kubelet[2232]: I0413 20:24:06.440572 2232 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:24:06.440997 kubelet[2232]: I0413 20:24:06.440974 2232 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:24:06.441734 kubelet[2232]: I0413 20:24:06.441712 2232 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:24:06.443597 kubelet[2232]: I0413 20:24:06.443569 2232 volume_manager.go:313] "Starting Kubelet Volume Manager"
Apr 13 20:24:06.443923 kubelet[2232]: E0413 20:24:06.443852 2232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found"
Apr 13 20:24:06.445278 kubelet[2232]: I0413 20:24:06.444901 2232 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:24:06.445278 kubelet[2232]: I0413 20:24:06.444974 2232 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:24:06.445709 kubelet[2232]: E0413 20:24:06.443228 2232 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.108:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.108:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal.18a6045067a8a512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,UID:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:24:06.427239698 +0000 UTC m=+0.813473055,LastTimestamp:2026-04-13 20:24:06.427239698 +0000 UTC m=+0.813473055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,}"
Apr 13 20:24:06.447178 kubelet[2232]: E0413 20:24:06.446010 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Apr 13 20:24:06.447178 kubelet[2232]: E0413 20:24:06.446113 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="200ms"
Apr 13 20:24:06.447712 kubelet[2232]: I0413 20:24:06.447683 2232 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:24:06.448385 kubelet[2232]: I0413 20:24:06.448354 2232 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:24:06.452425 kubelet[2232]: E0413 20:24:06.452360 2232 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:24:06.454773 kubelet[2232]: I0413 20:24:06.452964 2232 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:24:06.456374 kubelet[2232]: I0413 20:24:06.456325 2232 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:24:06.497007 kubelet[2232]: I0413 20:24:06.496951 2232 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:24:06.497007 kubelet[2232]: I0413 20:24:06.497007 2232 status_manager.go:244] "Starting to sync pod status with apiserver"
Apr 13 20:24:06.497235 kubelet[2232]: I0413 20:24:06.497050 2232 kubelet.go:2428] "Starting kubelet main sync loop"
Apr 13 20:24:06.497235 kubelet[2232]: E0413 20:24:06.497124 2232 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:24:06.506973 kubelet[2232]: E0413 20:24:06.506927 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Apr 13 20:24:06.515514 kubelet[2232]: I0413 20:24:06.515473 2232 cpu_manager.go:221] "Starting CPU manager" policy="none"
Apr 13 20:24:06.515855 kubelet[2232]: I0413 20:24:06.515823 2232 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Apr 13 20:24:06.516089 kubelet[2232]: I0413 20:24:06.516064 2232 state_mem.go:36] "Initialized new in-memory state store"
Apr 13 20:24:06.518718 kubelet[2232]: I0413 20:24:06.518682 2232 policy_none.go:49] "None policy: Start"
Apr 13 20:24:06.518718 kubelet[2232]: I0413 20:24:06.518719 2232 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:24:06.518897 kubelet[2232]: I0413 20:24:06.518758 2232 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:24:06.522534 kubelet[2232]: I0413 20:24:06.521825 2232 policy_none.go:47] "Start"
Apr 13 20:24:06.528770 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 20:24:06.544572 kubelet[2232]: E0413 20:24:06.544523 2232 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" Apr 13 20:24:06.546642 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:24:06.551684 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 20:24:06.561896 kubelet[2232]: E0413 20:24:06.561563 2232 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:24:06.562352 kubelet[2232]: I0413 20:24:06.562110 2232 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:24:06.562352 kubelet[2232]: I0413 20:24:06.562169 2232 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:24:06.563775 kubelet[2232]: I0413 20:24:06.563717 2232 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:24:06.566133 kubelet[2232]: E0413 20:24:06.566099 2232 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:24:06.566541 kubelet[2232]: E0413 20:24:06.566505 2232 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" Apr 13 20:24:06.627491 systemd[1]: Created slice kubepods-burstable-pod6892734398bed4a46b2f0f828fd04e0f.slice - libcontainer container kubepods-burstable-pod6892734398bed4a46b2f0f828fd04e0f.slice. 
Apr 13 20:24:06.639215 kubelet[2232]: E0413 20:24:06.639154 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.646203 kubelet[2232]: I0413 20:24:06.645825 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.646203 kubelet[2232]: I0413 20:24:06.645992 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.646203 kubelet[2232]: I0413 20:24:06.646033 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.646203 kubelet[2232]: I0413 20:24:06.646080 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647269 kubelet[2232]: I0413 20:24:06.646161 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b95fea2c44b7c81890496cf01a8e6836-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b95fea2c44b7c81890496cf01a8e6836\") " pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647269 kubelet[2232]: I0413 20:24:06.646204 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647269 kubelet[2232]: I0413 20:24:06.646252 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647269 kubelet[2232]: I0413 20:24:06.646279 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647537 kubelet[2232]: I0413 20:24:06.646321 2232 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.647537 kubelet[2232]: E0413 20:24:06.646888 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="400ms" Apr 13 20:24:06.649620 systemd[1]: Created slice kubepods-burstable-podb2ad21eb2c934d18e0688e3da8d7746c.slice - libcontainer container kubepods-burstable-podb2ad21eb2c934d18e0688e3da8d7746c.slice. Apr 13 20:24:06.653248 kubelet[2232]: E0413 20:24:06.653195 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.657591 systemd[1]: Created slice kubepods-burstable-podb95fea2c44b7c81890496cf01a8e6836.slice - libcontainer container kubepods-burstable-podb95fea2c44b7c81890496cf01a8e6836.slice. 
Apr 13 20:24:06.660559 kubelet[2232]: E0413 20:24:06.660503 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.671775 kubelet[2232]: I0413 20:24:06.671068 2232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.671775 kubelet[2232]: E0413 20:24:06.671553 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.877454 kubelet[2232]: I0413 20:24:06.877410 2232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.878542 kubelet[2232]: E0413 20:24:06.878072 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:06.944586 containerd[1471]: time="2026-04-13T20:24:06.944523755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:6892734398bed4a46b2f0f828fd04e0f,Namespace:kube-system,Attempt:0,}" Apr 13 20:24:06.961315 containerd[1471]: time="2026-04-13T20:24:06.961114002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:b2ad21eb2c934d18e0688e3da8d7746c,Namespace:kube-system,Attempt:0,}" Apr 13 20:24:06.964370 containerd[1471]: time="2026-04-13T20:24:06.964247355Z" level=info msg="RunPodSandbox 
for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:b95fea2c44b7c81890496cf01a8e6836,Namespace:kube-system,Attempt:0,}" Apr 13 20:24:07.047674 kubelet[2232]: E0413 20:24:07.047602 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="800ms" Apr 13 20:24:07.284201 kubelet[2232]: I0413 20:24:07.284136 2232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:07.284681 kubelet[2232]: E0413 20:24:07.284643 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:07.426290 kubelet[2232]: E0413 20:24:07.426225 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.108:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:24:07.526536 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3734460505.mount: Deactivated successfully. 
Apr 13 20:24:07.537361 containerd[1471]: time="2026-04-13T20:24:07.537173817Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:24:07.538909 containerd[1471]: time="2026-04-13T20:24:07.538849874Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:24:07.540354 containerd[1471]: time="2026-04-13T20:24:07.540288122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:24:07.541664 containerd[1471]: time="2026-04-13T20:24:07.541594813Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312062" Apr 13 20:24:07.543807 containerd[1471]: time="2026-04-13T20:24:07.543506954Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:24:07.545949 containerd[1471]: time="2026-04-13T20:24:07.545787079Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:24:07.550358 containerd[1471]: time="2026-04-13T20:24:07.550292420Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 589.055945ms" Apr 13 20:24:07.552139 containerd[1471]: time="2026-04-13T20:24:07.552074404Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:24:07.554886 containerd[1471]: time="2026-04-13T20:24:07.554824550Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 610.174349ms" Apr 13 20:24:07.557407 containerd[1471]: time="2026-04-13T20:24:07.557337639Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:24:07.559907 containerd[1471]: time="2026-04-13T20:24:07.559837447Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 595.47182ms" Apr 13 20:24:07.616273 kubelet[2232]: E0413 20:24:07.616170 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.108:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:24:07.789168 containerd[1471]: time="2026-04-13T20:24:07.783698666Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:07.789168 containerd[1471]: time="2026-04-13T20:24:07.784891714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:07.789168 containerd[1471]: time="2026-04-13T20:24:07.784915915Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.789168 containerd[1471]: time="2026-04-13T20:24:07.785081420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.793407 containerd[1471]: time="2026-04-13T20:24:07.793228389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:07.793565 containerd[1471]: time="2026-04-13T20:24:07.793519560Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:07.793650 containerd[1471]: time="2026-04-13T20:24:07.793592946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.798245 containerd[1471]: time="2026-04-13T20:24:07.796486769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:07.798245 containerd[1471]: time="2026-04-13T20:24:07.796559512Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:07.798245 containerd[1471]: time="2026-04-13T20:24:07.796577756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.798245 containerd[1471]: time="2026-04-13T20:24:07.796685763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.798245 containerd[1471]: time="2026-04-13T20:24:07.796068296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:07.836248 systemd[1]: Started cri-containerd-59c2f7aec9742b52c09d9b7007ec95fd642f17018f43bdd04ecefd7754a23524.scope - libcontainer container 59c2f7aec9742b52c09d9b7007ec95fd642f17018f43bdd04ecefd7754a23524. Apr 13 20:24:07.845444 kubelet[2232]: E0413 20:24:07.845364 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.108:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:24:07.850800 kubelet[2232]: E0413 20:24:07.850504 2232 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.108:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.108:6443: connect: connection refused" interval="1.6s" Apr 13 20:24:07.877118 systemd[1]: Started cri-containerd-7b6b15a7c0d794ea18fd23a58e99ebeda7f234f660953b48032b3aaf7f930b48.scope - libcontainer container 7b6b15a7c0d794ea18fd23a58e99ebeda7f234f660953b48032b3aaf7f930b48. Apr 13 20:24:07.880869 systemd[1]: Started cri-containerd-cc64e3b282f19cafe672074482dd53904faed0b09e5ac995e5b9a03cb65b5aeb.scope - libcontainer container cc64e3b282f19cafe672074482dd53904faed0b09e5ac995e5b9a03cb65b5aeb. 
Apr 13 20:24:07.974738 containerd[1471]: time="2026-04-13T20:24:07.973866325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:b2ad21eb2c934d18e0688e3da8d7746c,Namespace:kube-system,Attempt:0,} returns sandbox id \"59c2f7aec9742b52c09d9b7007ec95fd642f17018f43bdd04ecefd7754a23524\"" Apr 13 20:24:07.980917 kubelet[2232]: E0413 20:24:07.980135 2232 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flat" Apr 13 20:24:07.989426 kubelet[2232]: E0413 20:24:07.989365 2232 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.108:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:24:07.994323 containerd[1471]: time="2026-04-13T20:24:07.994253106Z" level=info msg="CreateContainer within sandbox \"59c2f7aec9742b52c09d9b7007ec95fd642f17018f43bdd04ecefd7754a23524\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:24:08.027441 containerd[1471]: time="2026-04-13T20:24:08.025694006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:b95fea2c44b7c81890496cf01a8e6836,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b6b15a7c0d794ea18fd23a58e99ebeda7f234f660953b48032b3aaf7f930b48\"" Apr 13 20:24:08.028579 kubelet[2232]: E0413 20:24:08.028533 2232 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-21291" Apr 13 20:24:08.037084 containerd[1471]: time="2026-04-13T20:24:08.036989026Z" level=info msg="CreateContainer within sandbox \"7b6b15a7c0d794ea18fd23a58e99ebeda7f234f660953b48032b3aaf7f930b48\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:24:08.042106 containerd[1471]: time="2026-04-13T20:24:08.040153830Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,Uid:6892734398bed4a46b2f0f828fd04e0f,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc64e3b282f19cafe672074482dd53904faed0b09e5ac995e5b9a03cb65b5aeb\"" Apr 13 20:24:08.045379 containerd[1471]: time="2026-04-13T20:24:08.045216415Z" level=info msg="CreateContainer within sandbox \"59c2f7aec9742b52c09d9b7007ec95fd642f17018f43bdd04ecefd7754a23524\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f0b992636d4096b96ebbb665c1a85dc831da18d3e850209bd262a00bc424f987\"" Apr 13 20:24:08.046725 kubelet[2232]: E0413 20:24:08.046685 2232 kubelet_pods.go:556] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-21291" Apr 13 20:24:08.048041 containerd[1471]: time="2026-04-13T20:24:08.047979517Z" level=info msg="StartContainer for \"f0b992636d4096b96ebbb665c1a85dc831da18d3e850209bd262a00bc424f987\"" Apr 13 20:24:08.053472 containerd[1471]: time="2026-04-13T20:24:08.053418216Z" level=info msg="CreateContainer within sandbox \"cc64e3b282f19cafe672074482dd53904faed0b09e5ac995e5b9a03cb65b5aeb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:24:08.073589 containerd[1471]: time="2026-04-13T20:24:08.073422150Z" level=info msg="CreateContainer within sandbox 
\"7b6b15a7c0d794ea18fd23a58e99ebeda7f234f660953b48032b3aaf7f930b48\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"91971cad59c62af82d5dfc0026784331ab4eeef740f7958150f0cc798b492646\"" Apr 13 20:24:08.074523 containerd[1471]: time="2026-04-13T20:24:08.074340260Z" level=info msg="StartContainer for \"91971cad59c62af82d5dfc0026784331ab4eeef740f7958150f0cc798b492646\"" Apr 13 20:24:08.092412 kubelet[2232]: I0413 20:24:08.092177 2232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:08.092958 kubelet[2232]: E0413 20:24:08.092734 2232 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.108:6443/api/v1/nodes\": dial tcp 10.128.0.108:6443: connect: connection refused" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:08.105067 containerd[1471]: time="2026-04-13T20:24:08.104871541Z" level=info msg="CreateContainer within sandbox \"cc64e3b282f19cafe672074482dd53904faed0b09e5ac995e5b9a03cb65b5aeb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f4206293b2a48ce60c7f120f2b0a984b9cd1cb61c04cd03e127a4aecc9225c70\"" Apr 13 20:24:08.106817 containerd[1471]: time="2026-04-13T20:24:08.105927661Z" level=info msg="StartContainer for \"f4206293b2a48ce60c7f120f2b0a984b9cd1cb61c04cd03e127a4aecc9225c70\"" Apr 13 20:24:08.133060 systemd[1]: Started cri-containerd-f0b992636d4096b96ebbb665c1a85dc831da18d3e850209bd262a00bc424f987.scope - libcontainer container f0b992636d4096b96ebbb665c1a85dc831da18d3e850209bd262a00bc424f987. Apr 13 20:24:08.176054 systemd[1]: Started cri-containerd-91971cad59c62af82d5dfc0026784331ab4eeef740f7958150f0cc798b492646.scope - libcontainer container 91971cad59c62af82d5dfc0026784331ab4eeef740f7958150f0cc798b492646. 
Apr 13 20:24:08.200047 systemd[1]: Started cri-containerd-f4206293b2a48ce60c7f120f2b0a984b9cd1cb61c04cd03e127a4aecc9225c70.scope - libcontainer container f4206293b2a48ce60c7f120f2b0a984b9cd1cb61c04cd03e127a4aecc9225c70. Apr 13 20:24:08.221677 update_engine[1458]: I20260413 20:24:08.221596 1458 update_attempter.cc:509] Updating boot flags... Apr 13 20:24:08.289132 containerd[1471]: time="2026-04-13T20:24:08.289044430Z" level=info msg="StartContainer for \"f0b992636d4096b96ebbb665c1a85dc831da18d3e850209bd262a00bc424f987\" returns successfully" Apr 13 20:24:08.342656 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2492) Apr 13 20:24:08.412073 containerd[1471]: time="2026-04-13T20:24:08.410440460Z" level=info msg="StartContainer for \"f4206293b2a48ce60c7f120f2b0a984b9cd1cb61c04cd03e127a4aecc9225c70\" returns successfully" Apr 13 20:24:08.424534 kubelet[2232]: E0413 20:24:08.424469 2232 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.108:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.108:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:24:08.539386 containerd[1471]: time="2026-04-13T20:24:08.539151633Z" level=info msg="StartContainer for \"91971cad59c62af82d5dfc0026784331ab4eeef740f7958150f0cc798b492646\" returns successfully" Apr 13 20:24:08.575136 kubelet[2232]: E0413 20:24:08.575075 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:08.584700 kubelet[2232]: E0413 20:24:08.584645 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:08.709794 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2497) Apr 13 20:24:09.588306 kubelet[2232]: E0413 20:24:09.587436 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:09.590364 kubelet[2232]: E0413 20:24:09.590167 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:09.700415 kubelet[2232]: I0413 20:24:09.699479 2232 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:10.592384 kubelet[2232]: E0413 20:24:10.592331 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:10.593872 kubelet[2232]: E0413 20:24:10.593833 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:11.597945 kubelet[2232]: E0413 20:24:11.597897 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" 
node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.414772 kubelet[2232]: I0413 20:24:12.414251 2232 apiserver.go:52] "Watching apiserver" Apr 13 20:24:12.545825 kubelet[2232]: I0413 20:24:12.545774 2232 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:24:12.597800 kubelet[2232]: E0413 20:24:12.597488 2232 kubelet.go:3216] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.687707 kubelet[2232]: E0413 20:24:12.687045 2232 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.756418 kubelet[2232]: E0413 20:24:12.756060 2232 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal.18a6045067a8a512 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,UID:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:24:06.427239698 +0000 UTC m=+0.813473055,LastTimestamp:2026-04-13 20:24:06.427239698 +0000 UTC m=+0.813473055,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal,}" Apr 13 20:24:12.801624 
kubelet[2232]: I0413 20:24:12.801569 2232 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.844520 kubelet[2232]: I0413 20:24:12.844468 2232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.887464 kubelet[2232]: E0413 20:24:12.887415 2232 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.887464 kubelet[2232]: I0413 20:24:12.887463 2232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.894789 kubelet[2232]: E0413 20:24:12.894475 2232 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.894789 kubelet[2232]: I0413 20:24:12.894523 2232 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:12.900715 kubelet[2232]: E0413 20:24:12.900653 2232 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:15.121834 systemd[1]: Reloading requested from client 
PID 2541 ('systemctl') (unit session-7.scope)... Apr 13 20:24:15.121880 systemd[1]: Reloading... Apr 13 20:24:15.316832 zram_generator::config[2577]: No configuration found. Apr 13 20:24:15.512896 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:24:15.658191 systemd[1]: Reloading finished in 535 ms. Apr 13 20:24:15.735804 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:24:15.737683 kubelet[2232]: I0413 20:24:15.735332 2232 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:24:15.754190 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:24:15.754567 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:24:15.754663 systemd[1]: kubelet.service: Consumed 1.545s CPU time, 127.6M memory peak, 0B memory swap peak. Apr 13 20:24:15.764399 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:24:16.084345 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:24:16.099028 (kubelet)[2629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:24:16.181671 kubelet[2629]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:24:16.181671 kubelet[2629]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:24:16.185003 kubelet[2629]: I0413 20:24:16.181731 2629 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:24:16.199820 kubelet[2629]: I0413 20:24:16.198567 2629 server.go:529] "Kubelet version" kubeletVersion="v1.34.4" Apr 13 20:24:16.199820 kubelet[2629]: I0413 20:24:16.198620 2629 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:24:16.199820 kubelet[2629]: I0413 20:24:16.198666 2629 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:24:16.199820 kubelet[2629]: I0413 20:24:16.198677 2629 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:24:16.199820 kubelet[2629]: I0413 20:24:16.199119 2629 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:24:16.201474 kubelet[2629]: I0413 20:24:16.201437 2629 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:24:16.205700 kubelet[2629]: I0413 20:24:16.205656 2629 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:24:16.221107 kubelet[2629]: E0413 20:24:16.220777 2629 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:24:16.221107 kubelet[2629]: I0413 20:24:16.220907 2629 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Apr 13 20:24:16.232282 kubelet[2629]: I0413 20:24:16.232216 2629 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Apr 13 20:24:16.232837 kubelet[2629]: I0413 20:24:16.232777 2629 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:24:16.233208 kubelet[2629]: I0413 20:24:16.232840 2629 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 13 20:24:16.233407 kubelet[2629]: I0413 20:24:16.233229 2629 topology_manager.go:138] "Creating topology 
manager with none policy" Apr 13 20:24:16.233407 kubelet[2629]: I0413 20:24:16.233248 2629 container_manager_linux.go:306] "Creating device plugin manager" Apr 13 20:24:16.233407 kubelet[2629]: I0413 20:24:16.233297 2629 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:24:16.234820 kubelet[2629]: I0413 20:24:16.233626 2629 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:24:16.234820 kubelet[2629]: I0413 20:24:16.233959 2629 kubelet.go:475] "Attempting to sync node with API server" Apr 13 20:24:16.234820 kubelet[2629]: I0413 20:24:16.234730 2629 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:24:16.234820 kubelet[2629]: I0413 20:24:16.234826 2629 kubelet.go:387] "Adding apiserver pod source" Apr 13 20:24:16.235080 kubelet[2629]: I0413 20:24:16.234855 2629 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:24:16.240701 kubelet[2629]: I0413 20:24:16.240653 2629 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:24:16.242000 kubelet[2629]: I0413 20:24:16.241965 2629 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:24:16.242148 kubelet[2629]: I0413 20:24:16.242022 2629 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:24:16.293303 kubelet[2629]: I0413 20:24:16.293196 2629 server.go:1262] "Started kubelet" Apr 13 20:24:16.297239 kubelet[2629]: I0413 20:24:16.296160 2629 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:24:16.297239 kubelet[2629]: I0413 20:24:16.296222 2629 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:24:16.297239 
kubelet[2629]: I0413 20:24:16.296563 2629 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:24:16.297239 kubelet[2629]: I0413 20:24:16.296654 2629 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:24:16.298009 kubelet[2629]: I0413 20:24:16.297782 2629 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:24:16.302803 kubelet[2629]: I0413 20:24:16.302507 2629 server.go:310] "Adding debug handlers to kubelet server" Apr 13 20:24:16.307609 kubelet[2629]: I0413 20:24:16.306939 2629 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:24:16.311726 kubelet[2629]: I0413 20:24:16.311680 2629 volume_manager.go:313] "Starting Kubelet Volume Manager" Apr 13 20:24:16.314233 kubelet[2629]: I0413 20:24:16.314197 2629 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:24:16.314536 kubelet[2629]: I0413 20:24:16.314415 2629 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:24:16.325076 kubelet[2629]: I0413 20:24:16.325031 2629 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:24:16.325282 kubelet[2629]: I0413 20:24:16.325235 2629 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:24:16.333699 kubelet[2629]: E0413 20:24:16.333436 2629 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:24:16.341010 kubelet[2629]: I0413 20:24:16.340873 2629 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:24:16.378588 kubelet[2629]: I0413 20:24:16.378345 2629 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Apr 13 20:24:16.383267 kubelet[2629]: I0413 20:24:16.383220 2629 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:24:16.383267 kubelet[2629]: I0413 20:24:16.383270 2629 status_manager.go:244] "Starting to sync pod status with apiserver" Apr 13 20:24:16.383648 kubelet[2629]: I0413 20:24:16.383313 2629 kubelet.go:2428] "Starting kubelet main sync loop" Apr 13 20:24:16.383648 kubelet[2629]: E0413 20:24:16.383399 2629 kubelet.go:2452] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:24:16.446049 kubelet[2629]: I0413 20:24:16.446009 2629 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:24:16.446049 kubelet[2629]: I0413 20:24:16.446040 2629 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:24:16.446293 kubelet[2629]: I0413 20:24:16.446077 2629 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:24:16.446293 kubelet[2629]: I0413 20:24:16.446285 2629 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:24:16.446418 kubelet[2629]: I0413 20:24:16.446302 2629 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:24:16.446418 kubelet[2629]: I0413 20:24:16.446333 2629 policy_none.go:49] "None policy: Start" Apr 13 20:24:16.446418 kubelet[2629]: I0413 20:24:16.446350 2629 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:24:16.446418 kubelet[2629]: I0413 20:24:16.446366 2629 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state 
checkpoint" Apr 13 20:24:16.446606 kubelet[2629]: I0413 20:24:16.446527 2629 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:24:16.446606 kubelet[2629]: I0413 20:24:16.446550 2629 policy_none.go:47] "Start" Apr 13 20:24:16.457809 kubelet[2629]: E0413 20:24:16.457774 2629 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:24:16.462785 kubelet[2629]: I0413 20:24:16.461035 2629 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:24:16.462785 kubelet[2629]: I0413 20:24:16.461063 2629 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:24:16.462785 kubelet[2629]: I0413 20:24:16.461881 2629 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:24:16.469008 kubelet[2629]: E0413 20:24:16.468839 2629 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Apr 13 20:24:16.487343 kubelet[2629]: I0413 20:24:16.486084 2629 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.488045 kubelet[2629]: I0413 20:24:16.488000 2629 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.492449 kubelet[2629]: I0413 20:24:16.492410 2629 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.506780 kubelet[2629]: I0413 20:24:16.505511 2629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:24:16.507781 kubelet[2629]: I0413 20:24:16.507689 2629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:24:16.509239 kubelet[2629]: I0413 20:24:16.509195 2629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:24:16.588371 kubelet[2629]: I0413 20:24:16.588182 2629 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.605290 kubelet[2629]: I0413 20:24:16.604505 2629 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.605290 kubelet[2629]: I0413 20:24:16.604626 2629 kubelet_node_status.go:78] "Successfully 
registered node" node="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617230 kubelet[2629]: I0413 20:24:16.615893 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617230 kubelet[2629]: I0413 20:24:16.615973 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617230 kubelet[2629]: I0413 20:24:16.616027 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617230 kubelet[2629]: I0413 20:24:16.616126 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617601 kubelet[2629]: I0413 20:24:16.616168 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b95fea2c44b7c81890496cf01a8e6836-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b95fea2c44b7c81890496cf01a8e6836\") " pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617601 kubelet[2629]: I0413 20:24:16.616210 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617601 kubelet[2629]: I0413 20:24:16.616249 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617601 kubelet[2629]: I0413 20:24:16.616289 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b2ad21eb2c934d18e0688e3da8d7746c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"b2ad21eb2c934d18e0688e3da8d7746c\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:16.617842 kubelet[2629]: I0413 20:24:16.616327 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6892734398bed4a46b2f0f828fd04e0f-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" (UID: \"6892734398bed4a46b2f0f828fd04e0f\") " pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:17.251016 kubelet[2629]: I0413 20:24:17.250901 2629 apiserver.go:52] "Watching apiserver" Apr 13 20:24:17.315033 kubelet[2629]: I0413 20:24:17.314938 2629 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:24:17.426784 kubelet[2629]: I0413 20:24:17.424316 2629 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:17.426784 kubelet[2629]: I0413 20:24:17.426658 2629 kubelet.go:3220] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:17.447080 kubelet[2629]: I0413 20:24:17.447037 2629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:24:17.447448 kubelet[2629]: E0413 20:24:17.447416 2629 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:17.454237 kubelet[2629]: I0413 20:24:17.454188 2629 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which 
can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:24:17.454574 kubelet[2629]: E0413 20:24:17.454543 2629 kubelet.go:3222] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:24:17.511569 kubelet[2629]: I0413 20:24:17.511250 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" podStartSLOduration=1.5112241069999999 podStartE2EDuration="1.511224107s" podCreationTimestamp="2026-04-13 20:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:24:17.485008694 +0000 UTC m=+1.377978231" watchObservedRunningTime="2026-04-13 20:24:17.511224107 +0000 UTC m=+1.404193648" Apr 13 20:24:17.542701 kubelet[2629]: I0413 20:24:17.542451 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" podStartSLOduration=1.5424292450000001 podStartE2EDuration="1.542429245s" podCreationTimestamp="2026-04-13 20:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:24:17.514360666 +0000 UTC m=+1.407330208" watchObservedRunningTime="2026-04-13 20:24:17.542429245 +0000 UTC m=+1.435398785" Apr 13 20:24:17.566291 kubelet[2629]: I0413 20:24:17.566192 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" podStartSLOduration=1.566168927 podStartE2EDuration="1.566168927s" podCreationTimestamp="2026-04-13 
20:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:24:17.543864783 +0000 UTC m=+1.436834320" watchObservedRunningTime="2026-04-13 20:24:17.566168927 +0000 UTC m=+1.459138468" Apr 13 20:24:22.400395 kubelet[2629]: I0413 20:24:22.400280 2629 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:24:22.401600 kubelet[2629]: I0413 20:24:22.401142 2629 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:24:22.401718 containerd[1471]: time="2026-04-13T20:24:22.400783257Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:24:23.502259 systemd[1]: Created slice kubepods-besteffort-pod6cee4009_3a7e_40c3_be15_1933e2ace3b3.slice - libcontainer container kubepods-besteffort-pod6cee4009_3a7e_40c3_be15_1933e2ace3b3.slice. Apr 13 20:24:23.566820 kubelet[2629]: I0413 20:24:23.566728 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-547sn\" (UniqueName: \"kubernetes.io/projected/6cee4009-3a7e-40c3-be15-1933e2ace3b3-kube-api-access-547sn\") pod \"kube-proxy-l4rz9\" (UID: \"6cee4009-3a7e-40c3-be15-1933e2ace3b3\") " pod="kube-system/kube-proxy-l4rz9" Apr 13 20:24:23.566820 kubelet[2629]: I0413 20:24:23.566818 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6cee4009-3a7e-40c3-be15-1933e2ace3b3-kube-proxy\") pod \"kube-proxy-l4rz9\" (UID: \"6cee4009-3a7e-40c3-be15-1933e2ace3b3\") " pod="kube-system/kube-proxy-l4rz9" Apr 13 20:24:23.567539 kubelet[2629]: I0413 20:24:23.566851 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/6cee4009-3a7e-40c3-be15-1933e2ace3b3-xtables-lock\") pod \"kube-proxy-l4rz9\" (UID: \"6cee4009-3a7e-40c3-be15-1933e2ace3b3\") " pod="kube-system/kube-proxy-l4rz9" Apr 13 20:24:23.567539 kubelet[2629]: I0413 20:24:23.566885 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6cee4009-3a7e-40c3-be15-1933e2ace3b3-lib-modules\") pod \"kube-proxy-l4rz9\" (UID: \"6cee4009-3a7e-40c3-be15-1933e2ace3b3\") " pod="kube-system/kube-proxy-l4rz9" Apr 13 20:24:23.631720 systemd[1]: Created slice kubepods-besteffort-pod077cfc5b_15b9_4f32_b002_6d5a201c24e8.slice - libcontainer container kubepods-besteffort-pod077cfc5b_15b9_4f32_b002_6d5a201c24e8.slice. Apr 13 20:24:23.669535 kubelet[2629]: I0413 20:24:23.668221 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2hkk\" (UniqueName: \"kubernetes.io/projected/077cfc5b-15b9-4f32-b002-6d5a201c24e8-kube-api-access-z2hkk\") pod \"tigera-operator-5588576f44-kqt9r\" (UID: \"077cfc5b-15b9-4f32-b002-6d5a201c24e8\") " pod="tigera-operator/tigera-operator-5588576f44-kqt9r" Apr 13 20:24:23.669535 kubelet[2629]: I0413 20:24:23.668294 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/077cfc5b-15b9-4f32-b002-6d5a201c24e8-var-lib-calico\") pod \"tigera-operator-5588576f44-kqt9r\" (UID: \"077cfc5b-15b9-4f32-b002-6d5a201c24e8\") " pod="tigera-operator/tigera-operator-5588576f44-kqt9r" Apr 13 20:24:23.816964 containerd[1471]: time="2026-04-13T20:24:23.815652983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4rz9,Uid:6cee4009-3a7e-40c3-be15-1933e2ace3b3,Namespace:kube-system,Attempt:0,}" Apr 13 20:24:23.864694 containerd[1471]: time="2026-04-13T20:24:23.864526215Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:23.864918 containerd[1471]: time="2026-04-13T20:24:23.864713329Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:23.864918 containerd[1471]: time="2026-04-13T20:24:23.864815979Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:23.865065 containerd[1471]: time="2026-04-13T20:24:23.865004968Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:23.912057 systemd[1]: Started cri-containerd-a92b3ef70e0fb18556810fc386b0e77e2281d34acaf3f335a206b879c5c1da94.scope - libcontainer container a92b3ef70e0fb18556810fc386b0e77e2281d34acaf3f335a206b879c5c1da94. Apr 13 20:24:23.941528 containerd[1471]: time="2026-04-13T20:24:23.941165736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-kqt9r,Uid:077cfc5b-15b9-4f32-b002-6d5a201c24e8,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:24:23.968028 containerd[1471]: time="2026-04-13T20:24:23.967967880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l4rz9,Uid:6cee4009-3a7e-40c3-be15-1933e2ace3b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a92b3ef70e0fb18556810fc386b0e77e2281d34acaf3f335a206b879c5c1da94\"" Apr 13 20:24:23.985471 containerd[1471]: time="2026-04-13T20:24:23.985403447Z" level=info msg="CreateContainer within sandbox \"a92b3ef70e0fb18556810fc386b0e77e2281d34acaf3f335a206b879c5c1da94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:24:24.002800 containerd[1471]: time="2026-04-13T20:24:23.999098507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:24.002800 containerd[1471]: time="2026-04-13T20:24:23.999272450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:24.002800 containerd[1471]: time="2026-04-13T20:24:23.999305129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:24.002800 containerd[1471]: time="2026-04-13T20:24:23.999549139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:24.035823 systemd[1]: Started cri-containerd-724ce86975122c65256ff50af85ba1a86a2b933a4aa16fa796d974441637f04b.scope - libcontainer container 724ce86975122c65256ff50af85ba1a86a2b933a4aa16fa796d974441637f04b. Apr 13 20:24:24.038626 containerd[1471]: time="2026-04-13T20:24:24.038572028Z" level=info msg="CreateContainer within sandbox \"a92b3ef70e0fb18556810fc386b0e77e2281d34acaf3f335a206b879c5c1da94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ada42026a5caf09e3b27ecfc9ec9cb3096615f2c59ffc39217c31354fc415440\"" Apr 13 20:24:24.041313 containerd[1471]: time="2026-04-13T20:24:24.041258865Z" level=info msg="StartContainer for \"ada42026a5caf09e3b27ecfc9ec9cb3096615f2c59ffc39217c31354fc415440\"" Apr 13 20:24:24.106044 systemd[1]: Started cri-containerd-ada42026a5caf09e3b27ecfc9ec9cb3096615f2c59ffc39217c31354fc415440.scope - libcontainer container ada42026a5caf09e3b27ecfc9ec9cb3096615f2c59ffc39217c31354fc415440. 
Apr 13 20:24:24.141925 containerd[1471]: time="2026-04-13T20:24:24.141869122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5588576f44-kqt9r,Uid:077cfc5b-15b9-4f32-b002-6d5a201c24e8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"724ce86975122c65256ff50af85ba1a86a2b933a4aa16fa796d974441637f04b\"" Apr 13 20:24:24.148941 containerd[1471]: time="2026-04-13T20:24:24.148738832Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:24:24.176125 containerd[1471]: time="2026-04-13T20:24:24.176062654Z" level=info msg="StartContainer for \"ada42026a5caf09e3b27ecfc9ec9cb3096615f2c59ffc39217c31354fc415440\" returns successfully" Apr 13 20:24:24.510087 kubelet[2629]: I0413 20:24:24.509445 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-l4rz9" podStartSLOduration=1.509419361 podStartE2EDuration="1.509419361s" podCreationTimestamp="2026-04-13 20:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:24:24.488221088 +0000 UTC m=+8.381190629" watchObservedRunningTime="2026-04-13 20:24:24.509419361 +0000 UTC m=+8.402388903" Apr 13 20:24:25.328623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2432257290.mount: Deactivated successfully. 
Apr 13 20:24:27.795505 containerd[1471]: time="2026-04-13T20:24:27.795415596Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:27.797672 containerd[1471]: time="2026-04-13T20:24:27.797580132Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156"
Apr 13 20:24:27.799604 containerd[1471]: time="2026-04-13T20:24:27.798973444Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:27.805611 containerd[1471]: time="2026-04-13T20:24:27.805522885Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:27.807547 containerd[1471]: time="2026-04-13T20:24:27.806651772Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.657832115s"
Apr 13 20:24:27.807547 containerd[1471]: time="2026-04-13T20:24:27.806711456Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\""
Apr 13 20:24:27.814189 containerd[1471]: time="2026-04-13T20:24:27.814134978Z" level=info msg="CreateContainer within sandbox \"724ce86975122c65256ff50af85ba1a86a2b933a4aa16fa796d974441637f04b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Apr 13 20:24:27.834476 containerd[1471]: time="2026-04-13T20:24:27.834386423Z" level=info msg="CreateContainer within sandbox \"724ce86975122c65256ff50af85ba1a86a2b933a4aa16fa796d974441637f04b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a2f103fc555a1a5235508b82f9428023e4a61ff552c09b6359e525158cdea277\""
Apr 13 20:24:27.836195 containerd[1471]: time="2026-04-13T20:24:27.836155109Z" level=info msg="StartContainer for \"a2f103fc555a1a5235508b82f9428023e4a61ff552c09b6359e525158cdea277\""
Apr 13 20:24:27.903120 systemd[1]: Started cri-containerd-a2f103fc555a1a5235508b82f9428023e4a61ff552c09b6359e525158cdea277.scope - libcontainer container a2f103fc555a1a5235508b82f9428023e4a61ff552c09b6359e525158cdea277.
Apr 13 20:24:27.953025 containerd[1471]: time="2026-04-13T20:24:27.952932230Z" level=info msg="StartContainer for \"a2f103fc555a1a5235508b82f9428023e4a61ff552c09b6359e525158cdea277\" returns successfully"
Apr 13 20:24:35.459599 sudo[1718]: pam_unix(sudo:session): session closed for user root
Apr 13 20:24:35.574528 sshd[1715]: pam_unix(sshd:session): session closed for user core
Apr 13 20:24:35.584355 systemd[1]: sshd@6-10.128.0.108:22-20.229.252.112:41904.service: Deactivated successfully.
Apr 13 20:24:35.590845 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:24:35.591827 systemd[1]: session-7.scope: Consumed 7.052s CPU time, 160.2M memory peak, 0B memory swap peak.
Apr 13 20:24:35.595807 systemd-logind[1451]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:24:35.600833 systemd-logind[1451]: Removed session 7.
Apr 13 20:24:39.314234 kubelet[2629]: I0413 20:24:39.313614 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5588576f44-kqt9r" podStartSLOduration=12.651148752 podStartE2EDuration="16.313519923s" podCreationTimestamp="2026-04-13 20:24:23 +0000 UTC" firstStartedPulling="2026-04-13 20:24:24.145642578 +0000 UTC m=+8.038612109" lastFinishedPulling="2026-04-13 20:24:27.80801375 +0000 UTC m=+11.700983280" observedRunningTime="2026-04-13 20:24:28.47951544 +0000 UTC m=+12.372484981" watchObservedRunningTime="2026-04-13 20:24:39.313519923 +0000 UTC m=+23.206489466"
Apr 13 20:24:39.341354 systemd[1]: Created slice kubepods-besteffort-pod875678be_4fe8_49e3_95d2_a1c26edff869.slice - libcontainer container kubepods-besteffort-pod875678be_4fe8_49e3_95d2_a1c26edff869.slice.
Apr 13 20:24:39.386400 kubelet[2629]: I0413 20:24:39.384054 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/875678be-4fe8-49e3-95d2-a1c26edff869-tigera-ca-bundle\") pod \"calico-typha-76fdb76fbb-5xvdl\" (UID: \"875678be-4fe8-49e3-95d2-a1c26edff869\") " pod="calico-system/calico-typha-76fdb76fbb-5xvdl"
Apr 13 20:24:39.386400 kubelet[2629]: I0413 20:24:39.385832 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/875678be-4fe8-49e3-95d2-a1c26edff869-typha-certs\") pod \"calico-typha-76fdb76fbb-5xvdl\" (UID: \"875678be-4fe8-49e3-95d2-a1c26edff869\") " pod="calico-system/calico-typha-76fdb76fbb-5xvdl"
Apr 13 20:24:39.386400 kubelet[2629]: I0413 20:24:39.386121 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vszfb\" (UniqueName: \"kubernetes.io/projected/875678be-4fe8-49e3-95d2-a1c26edff869-kube-api-access-vszfb\") pod \"calico-typha-76fdb76fbb-5xvdl\" (UID: \"875678be-4fe8-49e3-95d2-a1c26edff869\") " pod="calico-system/calico-typha-76fdb76fbb-5xvdl"
Apr 13 20:24:39.530611 systemd[1]: Created slice kubepods-besteffort-podce58dca1_e395_4dfb_8767_24e5979ce28a.slice - libcontainer container kubepods-besteffort-podce58dca1_e395_4dfb_8767_24e5979ce28a.slice.
Apr 13 20:24:39.586869 kubelet[2629]: I0413 20:24:39.586647 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ce58dca1-e395-4dfb-8767-24e5979ce28a-node-certs\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.586869 kubelet[2629]: I0413 20:24:39.586732 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-sys-fs\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.586869 kubelet[2629]: I0413 20:24:39.586781 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-var-lib-calico\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.586869 kubelet[2629]: I0413 20:24:39.586814 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-bpffs\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.586869 kubelet[2629]: I0413 20:24:39.586848 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-cni-bin-dir\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587434 kubelet[2629]: I0413 20:24:39.587195 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-flexvol-driver-host\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587434 kubelet[2629]: I0413 20:24:39.587242 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-xtables-lock\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587434 kubelet[2629]: I0413 20:24:39.587282 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-nodeproc\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587434 kubelet[2629]: I0413 20:24:39.587310 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-var-run-calico\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587434 kubelet[2629]: I0413 20:24:39.587340 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-cni-log-dir\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587694 kubelet[2629]: I0413 20:24:39.587367 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-policysync\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587694 kubelet[2629]: I0413 20:24:39.587403 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-cni-net-dir\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587694 kubelet[2629]: I0413 20:24:39.587434 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ce58dca1-e395-4dfb-8767-24e5979ce28a-lib-modules\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587694 kubelet[2629]: I0413 20:24:39.587467 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7tjv\" (UniqueName: \"kubernetes.io/projected/ce58dca1-e395-4dfb-8767-24e5979ce28a-kube-api-access-z7tjv\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.587694 kubelet[2629]: I0413 20:24:39.587501 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ce58dca1-e395-4dfb-8767-24e5979ce28a-tigera-ca-bundle\") pod \"calico-node-sg8gs\" (UID: \"ce58dca1-e395-4dfb-8767-24e5979ce28a\") " pod="calico-system/calico-node-sg8gs"
Apr 13 20:24:39.644735 kubelet[2629]: E0413 20:24:39.644650 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844"
Apr 13 20:24:39.651931 containerd[1471]: time="2026-04-13T20:24:39.651863448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76fdb76fbb-5xvdl,Uid:875678be-4fe8-49e3-95d2-a1c26edff869,Namespace:calico-system,Attempt:0,}"
Apr 13 20:24:39.690115 kubelet[2629]: I0413 20:24:39.688053 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/616bbb20-6acc-4142-9ccc-5584aac07844-socket-dir\") pod \"csi-node-driver-bjsrs\" (UID: \"616bbb20-6acc-4142-9ccc-5584aac07844\") " pod="calico-system/csi-node-driver-bjsrs"
Apr 13 20:24:39.690115 kubelet[2629]: I0413 20:24:39.688185 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/616bbb20-6acc-4142-9ccc-5584aac07844-kubelet-dir\") pod \"csi-node-driver-bjsrs\" (UID: \"616bbb20-6acc-4142-9ccc-5584aac07844\") " pod="calico-system/csi-node-driver-bjsrs"
Apr 13 20:24:39.690115 kubelet[2629]: I0413 20:24:39.688291 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/616bbb20-6acc-4142-9ccc-5584aac07844-registration-dir\") pod \"csi-node-driver-bjsrs\" (UID: \"616bbb20-6acc-4142-9ccc-5584aac07844\") " pod="calico-system/csi-node-driver-bjsrs"
Apr 13 20:24:39.690115 kubelet[2629]: I0413 20:24:39.688326 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/616bbb20-6acc-4142-9ccc-5584aac07844-varrun\") pod \"csi-node-driver-bjsrs\" (UID: \"616bbb20-6acc-4142-9ccc-5584aac07844\") " pod="calico-system/csi-node-driver-bjsrs"
Apr 13 20:24:39.690115 kubelet[2629]: I0413 20:24:39.688376 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7h9g\" (UniqueName: \"kubernetes.io/projected/616bbb20-6acc-4142-9ccc-5584aac07844-kube-api-access-q7h9g\") pod \"csi-node-driver-bjsrs\" (UID: \"616bbb20-6acc-4142-9ccc-5584aac07844\") " pod="calico-system/csi-node-driver-bjsrs"
Apr 13 20:24:39.694377 kubelet[2629]: E0413 20:24:39.694317 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.694638 kubelet[2629]: W0413 20:24:39.694606 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.694944 kubelet[2629]: E0413 20:24:39.694884 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.695913 kubelet[2629]: E0413 20:24:39.695889 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.696186 kubelet[2629]: W0413 20:24:39.696161 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.696444 kubelet[2629]: E0413 20:24:39.696422 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.702062 kubelet[2629]: E0413 20:24:39.701867 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.702699 kubelet[2629]: W0413 20:24:39.702276 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.702699 kubelet[2629]: E0413 20:24:39.702320 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.706966 kubelet[2629]: E0413 20:24:39.706796 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.707396 kubelet[2629]: W0413 20:24:39.707154 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.707674 kubelet[2629]: E0413 20:24:39.707489 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.710587 kubelet[2629]: E0413 20:24:39.709994 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.710587 kubelet[2629]: W0413 20:24:39.710022 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.710587 kubelet[2629]: E0413 20:24:39.710076 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.711083 kubelet[2629]: E0413 20:24:39.711063 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.711231 kubelet[2629]: W0413 20:24:39.711211 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.711422 kubelet[2629]: E0413 20:24:39.711326 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.715147 kubelet[2629]: E0413 20:24:39.715008 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.715147 kubelet[2629]: W0413 20:24:39.715034 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.715147 kubelet[2629]: E0413 20:24:39.715061 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.719289 kubelet[2629]: E0413 20:24:39.718982 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.719289 kubelet[2629]: W0413 20:24:39.719008 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.719289 kubelet[2629]: E0413 20:24:39.719035 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.722764 kubelet[2629]: E0413 20:24:39.722568 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.722764 kubelet[2629]: W0413 20:24:39.722596 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.729181 kubelet[2629]: E0413 20:24:39.726736 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.729181 kubelet[2629]: E0413 20:24:39.727617 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.729181 kubelet[2629]: W0413 20:24:39.727636 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.729181 kubelet[2629]: E0413 20:24:39.727662 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.729995 kubelet[2629]: E0413 20:24:39.729514 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.729995 kubelet[2629]: W0413 20:24:39.729533 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.729995 kubelet[2629]: E0413 20:24:39.729555 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.730884 kubelet[2629]: E0413 20:24:39.730258 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.730884 kubelet[2629]: W0413 20:24:39.730275 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.730884 kubelet[2629]: E0413 20:24:39.730294 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.731988 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.734431 kubelet[2629]: W0413 20:24:39.732011 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.732031 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.732468 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.734431 kubelet[2629]: W0413 20:24:39.732482 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.732499 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.734356 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.734431 kubelet[2629]: W0413 20:24:39.734373 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.734431 kubelet[2629]: E0413 20:24:39.734393 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.735886 kubelet[2629]: E0413 20:24:39.735860 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.735886 kubelet[2629]: W0413 20:24:39.735885 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.736145 kubelet[2629]: E0413 20:24:39.735905 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.737782 kubelet[2629]: E0413 20:24:39.737732 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.737782 kubelet[2629]: W0413 20:24:39.737778 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.737987 kubelet[2629]: E0413 20:24:39.737800 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.739317 kubelet[2629]: E0413 20:24:39.739251 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.739317 kubelet[2629]: W0413 20:24:39.739277 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.739317 kubelet[2629]: E0413 20:24:39.739311 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.740089 kubelet[2629]: E0413 20:24:39.739824 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.740089 kubelet[2629]: W0413 20:24:39.739842 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.740089 kubelet[2629]: E0413 20:24:39.739861 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.740785 containerd[1471]: time="2026-04-13T20:24:39.739246753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:24:39.740785 containerd[1471]: time="2026-04-13T20:24:39.740300286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:24:39.742281 kubelet[2629]: E0413 20:24:39.741468 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.742281 kubelet[2629]: W0413 20:24:39.741484 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.742281 kubelet[2629]: E0413 20:24:39.741501 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.742484 kubelet[2629]: E0413 20:24:39.742455 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.742484 kubelet[2629]: W0413 20:24:39.742472 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.742604 kubelet[2629]: E0413 20:24:39.742491 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.743489 containerd[1471]: time="2026-04-13T20:24:39.742868447Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:24:39.743723 containerd[1471]: time="2026-04-13T20:24:39.743434145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:24:39.744286 kubelet[2629]: E0413 20:24:39.743853 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.744286 kubelet[2629]: W0413 20:24:39.743868 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.744286 kubelet[2629]: E0413 20:24:39.743892 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.745000 kubelet[2629]: E0413 20:24:39.744812 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.745000 kubelet[2629]: W0413 20:24:39.744838 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.745000 kubelet[2629]: E0413 20:24:39.744864 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.746313 kubelet[2629]: E0413 20:24:39.745590 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.746313 kubelet[2629]: W0413 20:24:39.745606 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.746313 kubelet[2629]: E0413 20:24:39.745625 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.747076 kubelet[2629]: E0413 20:24:39.746908 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.747076 kubelet[2629]: W0413 20:24:39.746927 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.747076 kubelet[2629]: E0413 20:24:39.746946 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.748770 kubelet[2629]: E0413 20:24:39.748076 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.748770 kubelet[2629]: W0413 20:24:39.748097 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.748770 kubelet[2629]: E0413 20:24:39.748128 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.749639 kubelet[2629]: E0413 20:24:39.749613 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.749639 kubelet[2629]: W0413 20:24:39.749638 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.749898 kubelet[2629]: E0413 20:24:39.749658 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.752777 kubelet[2629]: E0413 20:24:39.751030 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.752777 kubelet[2629]: W0413 20:24:39.751050 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.752777 kubelet[2629]: E0413 20:24:39.751068 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.753225 kubelet[2629]: E0413 20:24:39.753197 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.753311 kubelet[2629]: W0413 20:24:39.753226 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.753311 kubelet[2629]: E0413 20:24:39.753250 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.761348 kubelet[2629]: E0413 20:24:39.761226 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.761348 kubelet[2629]: W0413 20:24:39.761265 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.761348 kubelet[2629]: E0413 20:24:39.761305 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:39.794214 kubelet[2629]: E0413 20:24:39.793861 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:39.794214 kubelet[2629]: W0413 20:24:39.793909 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:39.794214 kubelet[2629]: E0413 20:24:39.793943 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Apr 13 20:24:39.798088 kubelet[2629]: E0413 20:24:39.797462 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.798088 kubelet[2629]: W0413 20:24:39.797489 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.798088 kubelet[2629]: E0413 20:24:39.797519 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.804794 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.806536 kubelet[2629]: W0413 20:24:39.804827 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.804867 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.805925 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.806536 kubelet[2629]: W0413 20:24:39.805944 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.805976 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.806427 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.806536 kubelet[2629]: W0413 20:24:39.806445 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.806536 kubelet[2629]: E0413 20:24:39.806463 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.809540 kubelet[2629]: E0413 20:24:39.809003 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.809540 kubelet[2629]: W0413 20:24:39.809026 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.809540 kubelet[2629]: E0413 20:24:39.809051 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.809930 systemd[1]: Started cri-containerd-9b6716bdcd6c900cad524125d495843bda3b0c786bc3d34a39d3257000a6f2d8.scope - libcontainer container 9b6716bdcd6c900cad524125d495843bda3b0c786bc3d34a39d3257000a6f2d8. Apr 13 20:24:39.813010 kubelet[2629]: E0413 20:24:39.812986 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.814777 kubelet[2629]: W0413 20:24:39.813124 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.814777 kubelet[2629]: E0413 20:24:39.813153 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.815317 kubelet[2629]: E0413 20:24:39.815297 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.816062 kubelet[2629]: W0413 20:24:39.816034 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.816517 kubelet[2629]: E0413 20:24:39.816243 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.818257 kubelet[2629]: E0413 20:24:39.818070 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.818257 kubelet[2629]: W0413 20:24:39.818092 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.818257 kubelet[2629]: E0413 20:24:39.818115 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.820678 kubelet[2629]: E0413 20:24:39.820443 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.820678 kubelet[2629]: W0413 20:24:39.820465 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.820678 kubelet[2629]: E0413 20:24:39.820487 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.822913 kubelet[2629]: E0413 20:24:39.822835 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.822913 kubelet[2629]: W0413 20:24:39.822857 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.822913 kubelet[2629]: E0413 20:24:39.822883 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.824392 kubelet[2629]: E0413 20:24:39.824138 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.824392 kubelet[2629]: W0413 20:24:39.824159 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.824392 kubelet[2629]: E0413 20:24:39.824180 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.825987 kubelet[2629]: E0413 20:24:39.825817 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.825987 kubelet[2629]: W0413 20:24:39.825836 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.825987 kubelet[2629]: E0413 20:24:39.825857 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.828448 kubelet[2629]: E0413 20:24:39.827458 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.828448 kubelet[2629]: W0413 20:24:39.827477 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.828448 kubelet[2629]: E0413 20:24:39.827497 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.831674 kubelet[2629]: E0413 20:24:39.831397 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.831674 kubelet[2629]: W0413 20:24:39.831423 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.831674 kubelet[2629]: E0413 20:24:39.831446 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.832986 kubelet[2629]: E0413 20:24:39.832688 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.832986 kubelet[2629]: W0413 20:24:39.832798 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.832986 kubelet[2629]: E0413 20:24:39.832819 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.837505 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.840463 kubelet[2629]: W0413 20:24:39.837526 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.837547 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.837991 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.840463 kubelet[2629]: W0413 20:24:39.838009 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.838029 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.838621 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.840463 kubelet[2629]: W0413 20:24:39.839791 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.840463 kubelet[2629]: E0413 20:24:39.839813 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.841155 kubelet[2629]: E0413 20:24:39.841133 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.841430 kubelet[2629]: W0413 20:24:39.841252 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.841430 kubelet[2629]: E0413 20:24:39.841277 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.842251 kubelet[2629]: E0413 20:24:39.842226 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.842772 kubelet[2629]: W0413 20:24:39.842401 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.842772 kubelet[2629]: E0413 20:24:39.842429 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.845534 kubelet[2629]: E0413 20:24:39.845193 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.845534 kubelet[2629]: W0413 20:24:39.845215 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.845534 kubelet[2629]: E0413 20:24:39.845237 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.847650 kubelet[2629]: E0413 20:24:39.847077 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.847650 kubelet[2629]: W0413 20:24:39.847098 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.847650 kubelet[2629]: E0413 20:24:39.847118 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.849516 kubelet[2629]: E0413 20:24:39.849189 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.849516 kubelet[2629]: W0413 20:24:39.849219 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.849516 kubelet[2629]: E0413 20:24:39.849240 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.850136 kubelet[2629]: E0413 20:24:39.850065 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.850136 kubelet[2629]: W0413 20:24:39.850084 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.850136 kubelet[2629]: E0413 20:24:39.850105 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:39.851112 kubelet[2629]: E0413 20:24:39.850868 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.851112 kubelet[2629]: W0413 20:24:39.850898 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.851112 kubelet[2629]: E0413 20:24:39.850920 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.859486 containerd[1471]: time="2026-04-13T20:24:39.858761619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sg8gs,Uid:ce58dca1-e395-4dfb-8767-24e5979ce28a,Namespace:calico-system,Attempt:0,}" Apr 13 20:24:39.882576 kubelet[2629]: E0413 20:24:39.882534 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:39.882576 kubelet[2629]: W0413 20:24:39.882571 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:39.882893 kubelet[2629]: E0413 20:24:39.882608 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:39.922952 containerd[1471]: time="2026-04-13T20:24:39.918731245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:24:39.922952 containerd[1471]: time="2026-04-13T20:24:39.918917876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:24:39.922952 containerd[1471]: time="2026-04-13T20:24:39.918947213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:39.922952 containerd[1471]: time="2026-04-13T20:24:39.919303566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:24:39.975117 systemd[1]: Started cri-containerd-77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4.scope - libcontainer container 77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4. Apr 13 20:24:39.993953 containerd[1471]: time="2026-04-13T20:24:39.993641360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76fdb76fbb-5xvdl,Uid:875678be-4fe8-49e3-95d2-a1c26edff869,Namespace:calico-system,Attempt:0,} returns sandbox id \"9b6716bdcd6c900cad524125d495843bda3b0c786bc3d34a39d3257000a6f2d8\"" Apr 13 20:24:39.999495 containerd[1471]: time="2026-04-13T20:24:39.998345616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:24:40.043304 containerd[1471]: time="2026-04-13T20:24:40.043224063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sg8gs,Uid:ce58dca1-e395-4dfb-8767-24e5979ce28a,Namespace:calico-system,Attempt:0,} returns sandbox id \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\"" Apr 13 20:24:41.111685 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2890443739.mount: Deactivated successfully. 
Apr 13 20:24:41.386513 kubelet[2629]: E0413 20:24:41.384416 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:42.273621 containerd[1471]: time="2026-04-13T20:24:42.273539306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:42.275276 containerd[1471]: time="2026-04-13T20:24:42.275205704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:24:42.277266 containerd[1471]: time="2026-04-13T20:24:42.277162999Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:42.282217 containerd[1471]: time="2026-04-13T20:24:42.280812731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:42.282217 containerd[1471]: time="2026-04-13T20:24:42.282018091Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.283612349s" Apr 13 20:24:42.282217 containerd[1471]: time="2026-04-13T20:24:42.282066709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:24:42.284003 containerd[1471]: time="2026-04-13T20:24:42.283943512Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:24:42.314777 containerd[1471]: time="2026-04-13T20:24:42.314444398Z" level=info msg="CreateContainer within sandbox \"9b6716bdcd6c900cad524125d495843bda3b0c786bc3d34a39d3257000a6f2d8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:24:42.344602 containerd[1471]: time="2026-04-13T20:24:42.344531486Z" level=info msg="CreateContainer within sandbox \"9b6716bdcd6c900cad524125d495843bda3b0c786bc3d34a39d3257000a6f2d8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4160599aeedf56d35f31d78d335dd70a0303c0d864bb3f8ecfe6d19a064a9c34\"" Apr 13 20:24:42.346135 containerd[1471]: time="2026-04-13T20:24:42.346092735Z" level=info msg="StartContainer for \"4160599aeedf56d35f31d78d335dd70a0303c0d864bb3f8ecfe6d19a064a9c34\"" Apr 13 20:24:42.401124 systemd[1]: Started cri-containerd-4160599aeedf56d35f31d78d335dd70a0303c0d864bb3f8ecfe6d19a064a9c34.scope - libcontainer container 4160599aeedf56d35f31d78d335dd70a0303c0d864bb3f8ecfe6d19a064a9c34. 
Apr 13 20:24:42.472966 containerd[1471]: time="2026-04-13T20:24:42.472901757Z" level=info msg="StartContainer for \"4160599aeedf56d35f31d78d335dd70a0303c0d864bb3f8ecfe6d19a064a9c34\" returns successfully"
Apr 13 20:24:42.561482 kubelet[2629]: I0413 20:24:42.561238 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76fdb76fbb-5xvdl" podStartSLOduration=1.274836782 podStartE2EDuration="3.561170646s" podCreationTimestamp="2026-04-13 20:24:39 +0000 UTC" firstStartedPulling="2026-04-13 20:24:39.997354778 +0000 UTC m=+23.890324313" lastFinishedPulling="2026-04-13 20:24:42.283688641 +0000 UTC m=+26.176658177" observedRunningTime="2026-04-13 20:24:42.553899597 +0000 UTC m=+26.446869140" watchObservedRunningTime="2026-04-13 20:24:42.561170646 +0000 UTC m=+26.454140186"
Apr 13 20:24:42.610041 kubelet[2629]: E0413 20:24:42.609975 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:42.610041 kubelet[2629]: W0413 20:24:42.610026 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:42.610302 kubelet[2629]: E0413 20:24:42.610065 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:42.622588 kubelet[2629]: E0413 20:24:42.622549 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:42.622588 kubelet[2629]: W0413 20:24:42.622579 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:42.622835 kubelet[2629]: E0413 20:24:42.622603 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.623081 kubelet[2629]: E0413 20:24:42.623053 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.623081 kubelet[2629]: W0413 20:24:42.623079 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.623227 kubelet[2629]: E0413 20:24:42.623102 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.623525 kubelet[2629]: E0413 20:24:42.623489 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.623525 kubelet[2629]: W0413 20:24:42.623514 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.623701 kubelet[2629]: E0413 20:24:42.623546 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.651393 kubelet[2629]: E0413 20:24:42.651343 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.651393 kubelet[2629]: W0413 20:24:42.651387 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.651599 kubelet[2629]: E0413 20:24:42.651422 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.653792 kubelet[2629]: E0413 20:24:42.652141 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.653792 kubelet[2629]: W0413 20:24:42.652182 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.653792 kubelet[2629]: E0413 20:24:42.652211 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.653792 kubelet[2629]: E0413 20:24:42.652671 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.653792 kubelet[2629]: W0413 20:24:42.652689 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.653792 kubelet[2629]: E0413 20:24:42.652712 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.655309 kubelet[2629]: E0413 20:24:42.655286 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.655570 kubelet[2629]: W0413 20:24:42.655442 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.655570 kubelet[2629]: E0413 20:24:42.655475 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.656304 kubelet[2629]: E0413 20:24:42.656110 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.656304 kubelet[2629]: W0413 20:24:42.656128 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.656304 kubelet[2629]: E0413 20:24:42.656146 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.656975 kubelet[2629]: E0413 20:24:42.656841 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.656975 kubelet[2629]: W0413 20:24:42.656860 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.656975 kubelet[2629]: E0413 20:24:42.656878 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.658200 kubelet[2629]: E0413 20:24:42.657621 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.658200 kubelet[2629]: W0413 20:24:42.657639 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.658200 kubelet[2629]: E0413 20:24:42.657678 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.658826 kubelet[2629]: E0413 20:24:42.658804 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.658939 kubelet[2629]: W0413 20:24:42.658920 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.659025 kubelet[2629]: E0413 20:24:42.659010 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.659612 kubelet[2629]: E0413 20:24:42.659593 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.659732 kubelet[2629]: W0413 20:24:42.659716 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.659947 kubelet[2629]: E0413 20:24:42.659852 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.660573 kubelet[2629]: E0413 20:24:42.660428 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.660573 kubelet[2629]: W0413 20:24:42.660447 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.660573 kubelet[2629]: E0413 20:24:42.660464 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.662270 kubelet[2629]: E0413 20:24:42.662248 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.662410 kubelet[2629]: W0413 20:24:42.662389 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.662590 kubelet[2629]: E0413 20:24:42.662492 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.663376 kubelet[2629]: E0413 20:24:42.663190 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.663376 kubelet[2629]: W0413 20:24:42.663208 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.663376 kubelet[2629]: E0413 20:24:42.663227 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.663928 kubelet[2629]: E0413 20:24:42.663888 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.664185 kubelet[2629]: W0413 20:24:42.664058 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.664185 kubelet[2629]: E0413 20:24:42.664093 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.664882 kubelet[2629]: E0413 20:24:42.664734 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.664882 kubelet[2629]: W0413 20:24:42.664772 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.664882 kubelet[2629]: E0413 20:24:42.664791 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.666028 kubelet[2629]: E0413 20:24:42.665844 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.666028 kubelet[2629]: W0413 20:24:42.665864 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.666028 kubelet[2629]: E0413 20:24:42.665900 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.668032 kubelet[2629]: E0413 20:24:42.667870 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.668032 kubelet[2629]: W0413 20:24:42.667892 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.668032 kubelet[2629]: E0413 20:24:42.667912 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:42.668718 kubelet[2629]: E0413 20:24:42.668553 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.668718 kubelet[2629]: W0413 20:24:42.668573 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.668718 kubelet[2629]: E0413 20:24:42.668593 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:42.669668 kubelet[2629]: E0413 20:24:42.669585 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:42.669668 kubelet[2629]: W0413 20:24:42.669602 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:42.669668 kubelet[2629]: E0413 20:24:42.669622 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 13 20:24:43.385307 kubelet[2629]: E0413 20:24:43.384498 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844"
Apr 13 20:24:43.501784 containerd[1471]: time="2026-04-13T20:24:43.501625410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:43.503552 containerd[1471]: time="2026-04-13T20:24:43.503467307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250"
Apr 13 20:24:43.505170 containerd[1471]: time="2026-04-13T20:24:43.505090582Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:43.509300 containerd[1471]: time="2026-04-13T20:24:43.509216283Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:24:43.510993 containerd[1471]: time="2026-04-13T20:24:43.510493402Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.22629868s"
Apr 13 20:24:43.510993 containerd[1471]: time="2026-04-13T20:24:43.510549416Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\""
Apr 13 20:24:43.517351 containerd[1471]: time="2026-04-13T20:24:43.517287011Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Apr 13 20:24:43.550558 containerd[1471]: time="2026-04-13T20:24:43.549807146Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a\""
Apr 13 20:24:43.555059 containerd[1471]: time="2026-04-13T20:24:43.555003908Z" level=info msg="StartContainer for \"6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a\""
Apr 13 20:24:43.634255 kubelet[2629]: E0413 20:24:43.634049 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:43.634255 kubelet[2629]: W0413 20:24:43.634089 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:43.634255 kubelet[2629]: E0413 20:24:43.634129 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.638105 kubelet[2629]: E0413 20:24:43.635368 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.638105 kubelet[2629]: W0413 20:24:43.635395 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.638105 kubelet[2629]: E0413 20:24:43.635424 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.638105 kubelet[2629]: E0413 20:24:43.636478 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.638105 kubelet[2629]: W0413 20:24:43.636496 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.638105 kubelet[2629]: E0413 20:24:43.636519 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.638105 kubelet[2629]: E0413 20:24:43.637702 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.638105 kubelet[2629]: W0413 20:24:43.637725 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.639239 kubelet[2629]: E0413 20:24:43.638623 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.640765 kubelet[2629]: E0413 20:24:43.639903 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.640765 kubelet[2629]: W0413 20:24:43.639928 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.640765 kubelet[2629]: E0413 20:24:43.639952 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.641053 kubelet[2629]: E0413 20:24:43.641040 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.641136 kubelet[2629]: W0413 20:24:43.641056 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.641136 kubelet[2629]: E0413 20:24:43.641076 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.642055 kubelet[2629]: E0413 20:24:43.642000 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.642055 kubelet[2629]: W0413 20:24:43.642026 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.642055 kubelet[2629]: E0413 20:24:43.642047 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.643351 kubelet[2629]: E0413 20:24:43.642794 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.643351 kubelet[2629]: W0413 20:24:43.642840 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.643351 kubelet[2629]: E0413 20:24:43.642861 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.643639 kubelet[2629]: E0413 20:24:43.643379 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.643639 kubelet[2629]: W0413 20:24:43.643395 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.643639 kubelet[2629]: E0413 20:24:43.643413 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.645025 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.648483 kubelet[2629]: W0413 20:24:43.645047 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.645065 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.646876 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.648483 kubelet[2629]: W0413 20:24:43.646893 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.647032 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.648121 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.648483 kubelet[2629]: W0413 20:24:43.648276 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.648483 kubelet[2629]: E0413 20:24:43.648299 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.649218 kubelet[2629]: E0413 20:24:43.648829 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.649218 kubelet[2629]: W0413 20:24:43.648872 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.649218 kubelet[2629]: E0413 20:24:43.648893 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 13 20:24:43.649419 kubelet[2629]: E0413 20:24:43.649364 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:43.649419 kubelet[2629]: W0413 20:24:43.649406 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:43.649560 kubelet[2629]: E0413 20:24:43.649423 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:43.653090 kubelet[2629]: E0413 20:24:43.649928 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:24:43.653090 kubelet[2629]: W0413 20:24:43.649954 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:24:43.653090 kubelet[2629]: E0413 20:24:43.649973 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:24:43.655107 systemd[1]: Started cri-containerd-6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a.scope - libcontainer container 6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a.
Apr 13 20:24:43.666779 kubelet[2629]: E0413 20:24:43.665128 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.666779 kubelet[2629]: W0413 20:24:43.665191 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.666779 kubelet[2629]: E0413 20:24:43.665224 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.666779 kubelet[2629]: E0413 20:24:43.666383 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.666779 kubelet[2629]: W0413 20:24:43.666402 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.666779 kubelet[2629]: E0413 20:24:43.666449 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.667305 kubelet[2629]: E0413 20:24:43.667073 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.667305 kubelet[2629]: W0413 20:24:43.667116 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.667305 kubelet[2629]: E0413 20:24:43.667138 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.667680 kubelet[2629]: E0413 20:24:43.667657 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.667680 kubelet[2629]: W0413 20:24:43.667680 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.667872 kubelet[2629]: E0413 20:24:43.667721 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.668288 kubelet[2629]: E0413 20:24:43.668260 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.668288 kubelet[2629]: W0413 20:24:43.668285 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.668467 kubelet[2629]: E0413 20:24:43.668304 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.669090 kubelet[2629]: E0413 20:24:43.669064 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.669090 kubelet[2629]: W0413 20:24:43.669088 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.669250 kubelet[2629]: E0413 20:24:43.669107 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.670707 kubelet[2629]: E0413 20:24:43.670681 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.670707 kubelet[2629]: W0413 20:24:43.670706 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.670903 kubelet[2629]: E0413 20:24:43.670725 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.671522 kubelet[2629]: E0413 20:24:43.671485 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.671522 kubelet[2629]: W0413 20:24:43.671511 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.671682 kubelet[2629]: E0413 20:24:43.671530 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.672180 kubelet[2629]: E0413 20:24:43.672140 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.672180 kubelet[2629]: W0413 20:24:43.672167 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.672368 kubelet[2629]: E0413 20:24:43.672186 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.672627 kubelet[2629]: E0413 20:24:43.672603 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.672627 kubelet[2629]: W0413 20:24:43.672626 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.672818 kubelet[2629]: E0413 20:24:43.672646 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.673186 kubelet[2629]: E0413 20:24:43.673160 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.673186 kubelet[2629]: W0413 20:24:43.673185 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.673361 kubelet[2629]: E0413 20:24:43.673203 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.673637 kubelet[2629]: E0413 20:24:43.673592 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.673716 kubelet[2629]: W0413 20:24:43.673647 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.673716 kubelet[2629]: E0413 20:24:43.673666 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.674314 kubelet[2629]: E0413 20:24:43.674276 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.674314 kubelet[2629]: W0413 20:24:43.674298 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.674466 kubelet[2629]: E0413 20:24:43.674330 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.675495 kubelet[2629]: E0413 20:24:43.674867 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.675495 kubelet[2629]: W0413 20:24:43.674920 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.675495 kubelet[2629]: E0413 20:24:43.674943 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.676191 kubelet[2629]: E0413 20:24:43.676172 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.676315 kubelet[2629]: W0413 20:24:43.676297 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.676426 kubelet[2629]: E0413 20:24:43.676409 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.677065 kubelet[2629]: E0413 20:24:43.677046 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.677191 kubelet[2629]: W0413 20:24:43.677174 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.677304 kubelet[2629]: E0413 20:24:43.677285 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.678093 kubelet[2629]: E0413 20:24:43.678072 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.678404 kubelet[2629]: W0413 20:24:43.678263 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.678404 kubelet[2629]: E0413 20:24:43.678290 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:24:43.678916 kubelet[2629]: E0413 20:24:43.678886 2629 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:24:43.679039 kubelet[2629]: W0413 20:24:43.678907 2629 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:24:43.679039 kubelet[2629]: E0413 20:24:43.678942 2629 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:24:43.722119 containerd[1471]: time="2026-04-13T20:24:43.722021389Z" level=info msg="StartContainer for \"6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a\" returns successfully" Apr 13 20:24:43.744676 systemd[1]: cri-containerd-6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a.scope: Deactivated successfully. Apr 13 20:24:43.791620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a-rootfs.mount: Deactivated successfully. 
Apr 13 20:24:44.670517 containerd[1471]: time="2026-04-13T20:24:44.670425764Z" level=info msg="shim disconnected" id=6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a namespace=k8s.io Apr 13 20:24:44.670517 containerd[1471]: time="2026-04-13T20:24:44.670508563Z" level=warning msg="cleaning up after shim disconnected" id=6525f7a11669c6394d252a7736d76e0d658c6a5308fc48dbed35f42a1e5e1e3a namespace=k8s.io Apr 13 20:24:44.670517 containerd[1471]: time="2026-04-13T20:24:44.670524627Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:24:45.384267 kubelet[2629]: E0413 20:24:45.384169 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:45.544484 containerd[1471]: time="2026-04-13T20:24:45.544430129Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:24:47.385210 kubelet[2629]: E0413 20:24:47.384367 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:49.383889 kubelet[2629]: E0413 20:24:49.383713 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:51.384830 kubelet[2629]: E0413 20:24:51.384703 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:53.384508 kubelet[2629]: E0413 20:24:53.384131 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:54.477145 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2958412595.mount: Deactivated successfully. Apr 13 20:24:54.521385 containerd[1471]: time="2026-04-13T20:24:54.521291144Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:54.523708 containerd[1471]: time="2026-04-13T20:24:54.523613443Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:24:54.526203 containerd[1471]: time="2026-04-13T20:24:54.526124908Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:54.531770 containerd[1471]: time="2026-04-13T20:24:54.530029451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:24:54.531770 containerd[1471]: time="2026-04-13T20:24:54.531532279Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 8.987034695s" Apr 13 20:24:54.531770 containerd[1471]: time="2026-04-13T20:24:54.531587996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:24:54.540443 containerd[1471]: time="2026-04-13T20:24:54.540372551Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:24:54.573563 containerd[1471]: time="2026-04-13T20:24:54.572961093Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01\"" Apr 13 20:24:54.575046 containerd[1471]: time="2026-04-13T20:24:54.574984021Z" level=info msg="StartContainer for \"1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01\"" Apr 13 20:24:54.657127 systemd[1]: Started cri-containerd-1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01.scope - libcontainer container 1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01. Apr 13 20:24:54.746439 containerd[1471]: time="2026-04-13T20:24:54.745427287Z" level=info msg="StartContainer for \"1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01\" returns successfully" Apr 13 20:24:54.838117 systemd[1]: cri-containerd-1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01.scope: Deactivated successfully. 
Apr 13 20:24:55.385029 kubelet[2629]: E0413 20:24:55.384891 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:55.479559 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01-rootfs.mount: Deactivated successfully. Apr 13 20:24:56.476119 containerd[1471]: time="2026-04-13T20:24:56.475989945Z" level=info msg="shim disconnected" id=1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01 namespace=k8s.io Apr 13 20:24:56.476119 containerd[1471]: time="2026-04-13T20:24:56.476083160Z" level=warning msg="cleaning up after shim disconnected" id=1d8cc8e4ff62be0a230c9d7d4026db76f7489000a7abe40448423a93fab14c01 namespace=k8s.io Apr 13 20:24:56.476119 containerd[1471]: time="2026-04-13T20:24:56.476102439Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:24:56.606188 containerd[1471]: time="2026-04-13T20:24:56.606074050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:24:57.383926 kubelet[2629]: E0413 20:24:57.383853 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:24:59.386603 kubelet[2629]: E0413 20:24:59.384799 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" 
podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:25:00.623213 containerd[1471]: time="2026-04-13T20:25:00.623096816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:00.626092 containerd[1471]: time="2026-04-13T20:25:00.625710907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:25:00.629904 containerd[1471]: time="2026-04-13T20:25:00.628157249Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:00.633209 containerd[1471]: time="2026-04-13T20:25:00.633146704Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:00.634610 containerd[1471]: time="2026-04-13T20:25:00.634557078Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 4.028391217s" Apr 13 20:25:00.634834 containerd[1471]: time="2026-04-13T20:25:00.634804353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:25:00.642814 containerd[1471]: time="2026-04-13T20:25:00.642697635Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:25:00.672740 containerd[1471]: 
time="2026-04-13T20:25:00.672654452Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805\"" Apr 13 20:25:00.676035 containerd[1471]: time="2026-04-13T20:25:00.675968365Z" level=info msg="StartContainer for \"972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805\"" Apr 13 20:25:00.756926 systemd[1]: run-containerd-runc-k8s.io-972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805-runc.PPUg7G.mount: Deactivated successfully. Apr 13 20:25:00.767080 systemd[1]: Started cri-containerd-972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805.scope - libcontainer container 972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805. Apr 13 20:25:00.849701 containerd[1471]: time="2026-04-13T20:25:00.849619078Z" level=info msg="StartContainer for \"972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805\" returns successfully" Apr 13 20:25:01.384643 kubelet[2629]: E0413 20:25:01.384562 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:25:02.512936 containerd[1471]: time="2026-04-13T20:25:02.512825943Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:25:02.516893 systemd[1]: cri-containerd-972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805.scope: Deactivated successfully. 
Apr 13 20:25:02.518655 systemd[1]: cri-containerd-972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805.scope: Consumed 1.304s CPU time. Apr 13 20:25:02.557966 kubelet[2629]: I0413 20:25:02.555601 2629 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Apr 13 20:25:02.665906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805-rootfs.mount: Deactivated successfully. Apr 13 20:25:02.773466 systemd[1]: Created slice kubepods-burstable-pod19e38d3b_6f87_4768_8075_3c82e0d91d00.slice - libcontainer container kubepods-burstable-pod19e38d3b_6f87_4768_8075_3c82e0d91d00.slice. Apr 13 20:25:02.835402 kubelet[2629]: I0413 20:25:02.835291 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/19e38d3b-6f87-4768-8075-3c82e0d91d00-config-volume\") pod \"coredns-66bc5c9577-rgwfm\" (UID: \"19e38d3b-6f87-4768-8075-3c82e0d91d00\") " pod="kube-system/coredns-66bc5c9577-rgwfm" Apr 13 20:25:02.854077 kubelet[2629]: I0413 20:25:02.835414 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w54n6\" (UniqueName: \"kubernetes.io/projected/19e38d3b-6f87-4768-8075-3c82e0d91d00-kube-api-access-w54n6\") pod \"coredns-66bc5c9577-rgwfm\" (UID: \"19e38d3b-6f87-4768-8075-3c82e0d91d00\") " pod="kube-system/coredns-66bc5c9577-rgwfm" Apr 13 20:25:02.868833 systemd[1]: Created slice kubepods-burstable-pod07973dbc_15b9_4935_84bf_81b38774c1cf.slice - libcontainer container kubepods-burstable-pod07973dbc_15b9_4935_84bf_81b38774c1cf.slice. Apr 13 20:25:02.933002 systemd[1]: Created slice kubepods-besteffort-poda5b2719d_ae8e_4020_9d59_65852e11ae8d.slice - libcontainer container kubepods-besteffort-poda5b2719d_ae8e_4020_9d59_65852e11ae8d.slice. 
Apr 13 20:25:02.937503 kubelet[2629]: I0413 20:25:02.936343 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/07973dbc-15b9-4935-84bf-81b38774c1cf-config-volume\") pod \"coredns-66bc5c9577-lzkq6\" (UID: \"07973dbc-15b9-4935-84bf-81b38774c1cf\") " pod="kube-system/coredns-66bc5c9577-lzkq6" Apr 13 20:25:02.937503 kubelet[2629]: I0413 20:25:02.936443 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-776rd\" (UniqueName: \"kubernetes.io/projected/07973dbc-15b9-4935-84bf-81b38774c1cf-kube-api-access-776rd\") pod \"coredns-66bc5c9577-lzkq6\" (UID: \"07973dbc-15b9-4935-84bf-81b38774c1cf\") " pod="kube-system/coredns-66bc5c9577-lzkq6" Apr 13 20:25:03.129325 kubelet[2629]: I0413 20:25:03.037602 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a5b2719d-ae8e-4020-9d59-65852e11ae8d-tigera-ca-bundle\") pod \"calico-kube-controllers-6865ddd44-nxqpx\" (UID: \"a5b2719d-ae8e-4020-9d59-65852e11ae8d\") " pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" Apr 13 20:25:03.129325 kubelet[2629]: I0413 20:25:03.038447 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb7pt\" (UniqueName: \"kubernetes.io/projected/a5b2719d-ae8e-4020-9d59-65852e11ae8d-kube-api-access-sb7pt\") pod \"calico-kube-controllers-6865ddd44-nxqpx\" (UID: \"a5b2719d-ae8e-4020-9d59-65852e11ae8d\") " pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" Apr 13 20:25:03.194904 systemd[1]: Created slice kubepods-besteffort-podad3ffed3_72b8_4b25_b898_ef75c4c8b3c1.slice - libcontainer container kubepods-besteffort-podad3ffed3_72b8_4b25_b898_ef75c4c8b3c1.slice. 
Apr 13 20:25:03.229398 containerd[1471]: time="2026-04-13T20:25:03.229021467Z" level=info msg="shim disconnected" id=972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805 namespace=k8s.io Apr 13 20:25:03.229398 containerd[1471]: time="2026-04-13T20:25:03.229116988Z" level=warning msg="cleaning up after shim disconnected" id=972f44d9d2996fc3a809b01aca159e25d42b86442fd8662a360db8c2f511b805 namespace=k8s.io Apr 13 20:25:03.229398 containerd[1471]: time="2026-04-13T20:25:03.229132103Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:25:03.246586 containerd[1471]: time="2026-04-13T20:25:03.244231814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzkq6,Uid:07973dbc-15b9-4935-84bf-81b38774c1cf,Namespace:kube-system,Attempt:0,}" Apr 13 20:25:03.246794 kubelet[2629]: I0413 20:25:03.244557 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpr46\" (UniqueName: \"kubernetes.io/projected/ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1-kube-api-access-jpr46\") pod \"calico-apiserver-6769499dcc-lkthr\" (UID: \"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1\") " pod="calico-system/calico-apiserver-6769499dcc-lkthr" Apr 13 20:25:03.246794 kubelet[2629]: I0413 20:25:03.244611 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1-calico-apiserver-certs\") pod \"calico-apiserver-6769499dcc-lkthr\" (UID: \"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1\") " pod="calico-system/calico-apiserver-6769499dcc-lkthr" Apr 13 20:25:03.280768 systemd[1]: Created slice kubepods-besteffort-pod5538c62e_5813_4ee8_9c45_fed02ec42082.slice - libcontainer container kubepods-besteffort-pod5538c62e_5813_4ee8_9c45_fed02ec42082.slice. 
Apr 13 20:25:03.323968 systemd[1]: Created slice kubepods-besteffort-pod585a18a3_2006_4f0c_a63c_f101aa142823.slice - libcontainer container kubepods-besteffort-pod585a18a3_2006_4f0c_a63c_f101aa142823.slice. Apr 13 20:25:03.342161 systemd[1]: Created slice kubepods-besteffort-podbe2f09be_e384_4b88_a802_0ae6bc590ea7.slice - libcontainer container kubepods-besteffort-podbe2f09be_e384_4b88_a802_0ae6bc590ea7.slice. Apr 13 20:25:03.345526 kubelet[2629]: I0413 20:25:03.345472 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2f09be-e384-4b88-a802-0ae6bc590ea7-config\") pod \"goldmane-cccfbd5cf-w2rcq\" (UID: \"be2f09be-e384-4b88-a802-0ae6bc590ea7\") " pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 20:25:03.345526 kubelet[2629]: I0413 20:25:03.345544 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2f09be-e384-4b88-a802-0ae6bc590ea7-goldmane-ca-bundle\") pod \"goldmane-cccfbd5cf-w2rcq\" (UID: \"be2f09be-e384-4b88-a802-0ae6bc590ea7\") " pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 20:25:03.345526 kubelet[2629]: I0413 20:25:03.345578 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7whk\" (UniqueName: \"kubernetes.io/projected/be2f09be-e384-4b88-a802-0ae6bc590ea7-kube-api-access-q7whk\") pod \"goldmane-cccfbd5cf-w2rcq\" (UID: \"be2f09be-e384-4b88-a802-0ae6bc590ea7\") " pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 20:25:03.345526 kubelet[2629]: I0413 20:25:03.345625 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx2gk\" (UniqueName: \"kubernetes.io/projected/5538c62e-5813-4ee8-9c45-fed02ec42082-kube-api-access-nx2gk\") pod \"whisker-68478d5f94-jn8p5\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " 
pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 20:25:03.345526 kubelet[2629]: I0413 20:25:03.345702 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/be2f09be-e384-4b88-a802-0ae6bc590ea7-goldmane-key-pair\") pod \"goldmane-cccfbd5cf-w2rcq\" (UID: \"be2f09be-e384-4b88-a802-0ae6bc590ea7\") " pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 20:25:03.347773 kubelet[2629]: I0413 20:25:03.347139 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-backend-key-pair\") pod \"whisker-68478d5f94-jn8p5\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 20:25:03.347773 kubelet[2629]: I0413 20:25:03.347263 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-nginx-config\") pod \"whisker-68478d5f94-jn8p5\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 20:25:03.347773 kubelet[2629]: I0413 20:25:03.347339 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/585a18a3-2006-4f0c-a63c-f101aa142823-calico-apiserver-certs\") pod \"calico-apiserver-6769499dcc-cptll\" (UID: \"585a18a3-2006-4f0c-a63c-f101aa142823\") " pod="calico-system/calico-apiserver-6769499dcc-cptll" Apr 13 20:25:03.347773 kubelet[2629]: I0413 20:25:03.347420 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-ca-bundle\") pod 
\"whisker-68478d5f94-jn8p5\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 20:25:03.347773 kubelet[2629]: I0413 20:25:03.347671 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rvdc\" (UniqueName: \"kubernetes.io/projected/585a18a3-2006-4f0c-a63c-f101aa142823-kube-api-access-5rvdc\") pod \"calico-apiserver-6769499dcc-cptll\" (UID: \"585a18a3-2006-4f0c-a63c-f101aa142823\") " pod="calico-system/calico-apiserver-6769499dcc-cptll" Apr 13 20:25:03.394216 containerd[1471]: time="2026-04-13T20:25:03.393159957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rgwfm,Uid:19e38d3b-6f87-4768-8075-3c82e0d91d00,Namespace:kube-system,Attempt:0,}" Apr 13 20:25:03.396636 systemd[1]: Created slice kubepods-besteffort-pod616bbb20_6acc_4142_9ccc_5584aac07844.slice - libcontainer container kubepods-besteffort-pod616bbb20_6acc_4142_9ccc_5584aac07844.slice. 
Apr 13 20:25:03.406637 containerd[1471]: time="2026-04-13T20:25:03.406167261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjsrs,Uid:616bbb20-6acc-4142-9ccc-5584aac07844,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.545939 containerd[1471]: time="2026-04-13T20:25:03.545883474Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-lkthr,Uid:ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.548806 containerd[1471]: time="2026-04-13T20:25:03.548698351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6865ddd44-nxqpx,Uid:a5b2719d-ae8e-4020-9d59-65852e11ae8d,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.565195 containerd[1471]: time="2026-04-13T20:25:03.565009176Z" level=error msg="Failed to destroy network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.566593 containerd[1471]: time="2026-04-13T20:25:03.566490684Z" level=error msg="encountered an error cleaning up failed sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.566728 containerd[1471]: time="2026-04-13T20:25:03.566594118Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzkq6,Uid:07973dbc-15b9-4935-84bf-81b38774c1cf,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.568552 kubelet[2629]: E0413 20:25:03.568011 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.568552 kubelet[2629]: E0413 20:25:03.568143 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lzkq6" Apr 13 20:25:03.568552 kubelet[2629]: E0413 20:25:03.568204 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-lzkq6" Apr 13 20:25:03.569981 kubelet[2629]: E0413 20:25:03.568302 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-lzkq6_kube-system(07973dbc-15b9-4935-84bf-81b38774c1cf)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-lzkq6_kube-system(07973dbc-15b9-4935-84bf-81b38774c1cf)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lzkq6" podUID="07973dbc-15b9-4935-84bf-81b38774c1cf" Apr 13 20:25:03.605906 containerd[1471]: time="2026-04-13T20:25:03.605289002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68478d5f94-jn8p5,Uid:5538c62e-5813-4ee8-9c45-fed02ec42082,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.638835 containerd[1471]: time="2026-04-13T20:25:03.638395155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-cptll,Uid:585a18a3-2006-4f0c-a63c-f101aa142823,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.673362 containerd[1471]: time="2026-04-13T20:25:03.671708552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w2rcq,Uid:be2f09be-e384-4b88-a802-0ae6bc590ea7,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:03.686803 kubelet[2629]: I0413 20:25:03.686460 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:03.701938 containerd[1471]: time="2026-04-13T20:25:03.699123373Z" level=info msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" Apr 13 20:25:03.702178 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80-shm.mount: Deactivated successfully. 
Apr 13 20:25:03.719773 containerd[1471]: time="2026-04-13T20:25:03.717028872Z" level=info msg="Ensure that sandbox b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80 in task-service has been cleanup successfully" Apr 13 20:25:03.769141 containerd[1471]: time="2026-04-13T20:25:03.768886537Z" level=error msg="Failed to destroy network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.777909 containerd[1471]: time="2026-04-13T20:25:03.776641444Z" level=error msg="encountered an error cleaning up failed sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.778142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5-shm.mount: Deactivated successfully. 
Apr 13 20:25:03.783889 containerd[1471]: time="2026-04-13T20:25:03.783826175Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rgwfm,Uid:19e38d3b-6f87-4768-8075-3c82e0d91d00,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.795097 kubelet[2629]: E0413 20:25:03.793809 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.795097 kubelet[2629]: E0413 20:25:03.794470 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rgwfm" Apr 13 20:25:03.795097 kubelet[2629]: E0413 20:25:03.794791 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-rgwfm" Apr 13 20:25:03.810028 
kubelet[2629]: E0413 20:25:03.796910 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-rgwfm_kube-system(19e38d3b-6f87-4768-8075-3c82e0d91d00)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-rgwfm_kube-system(19e38d3b-6f87-4768-8075-3c82e0d91d00)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-rgwfm" podUID="19e38d3b-6f87-4768-8075-3c82e0d91d00" Apr 13 20:25:03.808179 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e-shm.mount: Deactivated successfully. Apr 13 20:25:03.810305 containerd[1471]: time="2026-04-13T20:25:03.799618283Z" level=error msg="Failed to destroy network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.849413 containerd[1471]: time="2026-04-13T20:25:03.844832673Z" level=error msg="encountered an error cleaning up failed sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.849413 containerd[1471]: time="2026-04-13T20:25:03.844957287Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-bjsrs,Uid:616bbb20-6acc-4142-9ccc-5584aac07844,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.850865 kubelet[2629]: E0413 20:25:03.850792 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:03.852046 kubelet[2629]: E0413 20:25:03.851898 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjsrs" Apr 13 20:25:03.852857 kubelet[2629]: E0413 20:25:03.852149 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bjsrs" Apr 13 20:25:03.852857 kubelet[2629]: E0413 20:25:03.852253 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-bjsrs_calico-system(616bbb20-6acc-4142-9ccc-5584aac07844)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bjsrs_calico-system(616bbb20-6acc-4142-9ccc-5584aac07844)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bjsrs" podUID="616bbb20-6acc-4142-9ccc-5584aac07844" Apr 13 20:25:03.889540 containerd[1471]: time="2026-04-13T20:25:03.889227357Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:25:04.054496 containerd[1471]: time="2026-04-13T20:25:04.053737142Z" level=info msg="CreateContainer within sandbox \"77872e87b556eafb36bbcadf58ed79b6a7d342cbfdeeda8f79e069ce25303bc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c84e352f59989e4744919376bff74ba7c365296ede6b2fa8515b66d24eae3745\"" Apr 13 20:25:04.061037 containerd[1471]: time="2026-04-13T20:25:04.060178072Z" level=info msg="StartContainer for \"c84e352f59989e4744919376bff74ba7c365296ede6b2fa8515b66d24eae3745\"" Apr 13 20:25:04.092708 containerd[1471]: time="2026-04-13T20:25:04.092540467Z" level=error msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" failed" error="failed to destroy network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.093889 kubelet[2629]: E0413 20:25:04.093817 2629 
log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:04.094049 kubelet[2629]: E0413 20:25:04.093925 2629 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80"} Apr 13 20:25:04.094049 kubelet[2629]: E0413 20:25:04.094026 2629 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"07973dbc-15b9-4935-84bf-81b38774c1cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:25:04.094244 kubelet[2629]: E0413 20:25:04.094069 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"07973dbc-15b9-4935-84bf-81b38774c1cf\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-lzkq6" podUID="07973dbc-15b9-4935-84bf-81b38774c1cf" Apr 13 20:25:04.171802 containerd[1471]: time="2026-04-13T20:25:04.171546300Z" level=error msg="Failed to 
destroy network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.172653 containerd[1471]: time="2026-04-13T20:25:04.172374335Z" level=error msg="encountered an error cleaning up failed sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.173159 containerd[1471]: time="2026-04-13T20:25:04.173093866Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6865ddd44-nxqpx,Uid:a5b2719d-ae8e-4020-9d59-65852e11ae8d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.174073 kubelet[2629]: E0413 20:25:04.173913 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.174073 kubelet[2629]: E0413 20:25:04.174022 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" Apr 13 20:25:04.174260 kubelet[2629]: E0413 20:25:04.174088 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" Apr 13 20:25:04.177571 kubelet[2629]: E0413 20:25:04.174199 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6865ddd44-nxqpx_calico-system(a5b2719d-ae8e-4020-9d59-65852e11ae8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6865ddd44-nxqpx_calico-system(a5b2719d-ae8e-4020-9d59-65852e11ae8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" podUID="a5b2719d-ae8e-4020-9d59-65852e11ae8d" Apr 13 20:25:04.201067 containerd[1471]: time="2026-04-13T20:25:04.200392832Z" level=error msg="Failed to destroy network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.206904 containerd[1471]: time="2026-04-13T20:25:04.205216089Z" level=error msg="encountered an error cleaning up failed sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.207235 containerd[1471]: time="2026-04-13T20:25:04.207190435Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68478d5f94-jn8p5,Uid:5538c62e-5813-4ee8-9c45-fed02ec42082,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.207889 kubelet[2629]: E0413 20:25:04.207812 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.209118 kubelet[2629]: E0413 20:25:04.208887 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 
20:25:04.209118 kubelet[2629]: E0413 20:25:04.208945 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68478d5f94-jn8p5" Apr 13 20:25:04.209118 kubelet[2629]: E0413 20:25:04.209054 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68478d5f94-jn8p5_calico-system(5538c62e-5813-4ee8-9c45-fed02ec42082)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68478d5f94-jn8p5_calico-system(5538c62e-5813-4ee8-9c45-fed02ec42082)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68478d5f94-jn8p5" podUID="5538c62e-5813-4ee8-9c45-fed02ec42082" Apr 13 20:25:04.222941 containerd[1471]: time="2026-04-13T20:25:04.222856245Z" level=error msg="Failed to destroy network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.224774 containerd[1471]: time="2026-04-13T20:25:04.224585795Z" level=error msg="encountered an error cleaning up failed sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.225207 containerd[1471]: time="2026-04-13T20:25:04.225064163Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-lkthr,Uid:ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.226857 kubelet[2629]: E0413 20:25:04.226172 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.226857 kubelet[2629]: E0413 20:25:04.226267 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6769499dcc-lkthr" Apr 13 20:25:04.226857 kubelet[2629]: E0413 20:25:04.226315 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6769499dcc-lkthr" Apr 13 20:25:04.227106 kubelet[2629]: E0413 20:25:04.226413 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6769499dcc-lkthr_calico-system(ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6769499dcc-lkthr_calico-system(ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6769499dcc-lkthr" podUID="ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1" Apr 13 20:25:04.257697 systemd[1]: Started cri-containerd-c84e352f59989e4744919376bff74ba7c365296ede6b2fa8515b66d24eae3745.scope - libcontainer container c84e352f59989e4744919376bff74ba7c365296ede6b2fa8515b66d24eae3745. 
Apr 13 20:25:04.271342 containerd[1471]: time="2026-04-13T20:25:04.271265156Z" level=error msg="Failed to destroy network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.273011 containerd[1471]: time="2026-04-13T20:25:04.272016492Z" level=error msg="encountered an error cleaning up failed sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.273011 containerd[1471]: time="2026-04-13T20:25:04.272181218Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-cptll,Uid:585a18a3-2006-4f0c-a63c-f101aa142823,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.273299 kubelet[2629]: E0413 20:25:04.272756 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.273299 kubelet[2629]: E0413 20:25:04.272866 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6769499dcc-cptll" Apr 13 20:25:04.273299 kubelet[2629]: E0413 20:25:04.272915 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-6769499dcc-cptll" Apr 13 20:25:04.273962 kubelet[2629]: E0413 20:25:04.273016 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6769499dcc-cptll_calico-system(585a18a3-2006-4f0c-a63c-f101aa142823)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6769499dcc-cptll_calico-system(585a18a3-2006-4f0c-a63c-f101aa142823)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-6769499dcc-cptll" podUID="585a18a3-2006-4f0c-a63c-f101aa142823" Apr 13 20:25:04.304324 containerd[1471]: time="2026-04-13T20:25:04.304056848Z" level=error msg="Failed to destroy network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.305153 containerd[1471]: time="2026-04-13T20:25:04.304935664Z" level=error msg="encountered an error cleaning up failed sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.305290 containerd[1471]: time="2026-04-13T20:25:04.305180597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w2rcq,Uid:be2f09be-e384-4b88-a802-0ae6bc590ea7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.308138 kubelet[2629]: E0413 20:25:04.308036 2629 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:25:04.308286 kubelet[2629]: E0413 20:25:04.308179 2629 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 
20:25:04.308286 kubelet[2629]: E0413 20:25:04.308222 2629 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-cccfbd5cf-w2rcq" Apr 13 20:25:04.310924 kubelet[2629]: E0413 20:25:04.310833 2629 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-cccfbd5cf-w2rcq_calico-system(be2f09be-e384-4b88-a802-0ae6bc590ea7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-cccfbd5cf-w2rcq_calico-system(be2f09be-e384-4b88-a802-0ae6bc590ea7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-cccfbd5cf-w2rcq" podUID="be2f09be-e384-4b88-a802-0ae6bc590ea7" Apr 13 20:25:04.346885 containerd[1471]: time="2026-04-13T20:25:04.346737952Z" level=info msg="StartContainer for \"c84e352f59989e4744919376bff74ba7c365296ede6b2fa8515b66d24eae3745\" returns successfully" Apr 13 20:25:04.654740 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8-shm.mount: Deactivated successfully. Apr 13 20:25:04.654968 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14-shm.mount: Deactivated successfully. 
Apr 13 20:25:04.655101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011-shm.mount: Deactivated successfully. Apr 13 20:25:04.779272 kubelet[2629]: I0413 20:25:04.779028 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:04.785114 containerd[1471]: time="2026-04-13T20:25:04.784425582Z" level=info msg="StopPodSandbox for \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\"" Apr 13 20:25:04.788950 kubelet[2629]: I0413 20:25:04.784942 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Apr 13 20:25:04.789085 containerd[1471]: time="2026-04-13T20:25:04.786425766Z" level=info msg="Ensure that sandbox 6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011 in task-service has been cleanup successfully" Apr 13 20:25:04.790795 containerd[1471]: time="2026-04-13T20:25:04.790139716Z" level=info msg="StopPodSandbox for \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\"" Apr 13 20:25:04.793358 containerd[1471]: time="2026-04-13T20:25:04.792807622Z" level=info msg="Ensure that sandbox cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e in task-service has been cleanup successfully" Apr 13 20:25:04.798717 kubelet[2629]: I0413 20:25:04.798666 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:04.802686 containerd[1471]: time="2026-04-13T20:25:04.802622432Z" level=info msg="StopPodSandbox for \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\"" Apr 13 20:25:04.804140 containerd[1471]: time="2026-04-13T20:25:04.804001075Z" level=info msg="Ensure that sandbox 
f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e in task-service has been cleanup successfully" Apr 13 20:25:04.851809 kubelet[2629]: I0413 20:25:04.849931 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Apr 13 20:25:04.852096 containerd[1471]: time="2026-04-13T20:25:04.850432413Z" level=info msg="StopPodSandbox for \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\"" Apr 13 20:25:04.852096 containerd[1471]: time="2026-04-13T20:25:04.850808509Z" level=info msg="Ensure that sandbox 795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8 in task-service has been cleanup successfully" Apr 13 20:25:04.858494 kubelet[2629]: I0413 20:25:04.858429 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:04.865271 containerd[1471]: time="2026-04-13T20:25:04.864288948Z" level=info msg="StopPodSandbox for \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\"" Apr 13 20:25:04.879477 kubelet[2629]: I0413 20:25:04.879407 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:04.880898 containerd[1471]: time="2026-04-13T20:25:04.880429937Z" level=info msg="StopPodSandbox for \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\"" Apr 13 20:25:04.882981 containerd[1471]: time="2026-04-13T20:25:04.882918687Z" level=info msg="Ensure that sandbox cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14 in task-service has been cleanup successfully" Apr 13 20:25:04.883552 containerd[1471]: time="2026-04-13T20:25:04.883511389Z" level=info msg="Ensure that sandbox c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5 in task-service has been cleanup successfully" Apr 13 
20:25:04.911354 kubelet[2629]: I0413 20:25:04.911039 2629 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:04.919508 containerd[1471]: time="2026-04-13T20:25:04.918430422Z" level=info msg="StopPodSandbox for \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\"" Apr 13 20:25:04.922540 containerd[1471]: time="2026-04-13T20:25:04.920524433Z" level=info msg="Ensure that sandbox 807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb in task-service has been cleanup successfully" Apr 13 20:25:05.022894 kubelet[2629]: I0413 20:25:05.018120 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-sg8gs" podStartSLOduration=5.427207574 podStartE2EDuration="26.018077173s" podCreationTimestamp="2026-04-13 20:24:39 +0000 UTC" firstStartedPulling="2026-04-13 20:24:40.045479531 +0000 UTC m=+23.938449058" lastFinishedPulling="2026-04-13 20:25:00.636349131 +0000 UTC m=+44.529318657" observedRunningTime="2026-04-13 20:25:04.991365743 +0000 UTC m=+48.884335284" watchObservedRunningTime="2026-04-13 20:25:05.018077173 +0000 UTC m=+48.911046719" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.412 [INFO][3814] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.413 [INFO][3814] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" iface="eth0" netns="/var/run/netns/cni-88941db6-0be8-14ab-63b5-e87da7d3f330" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.415 [INFO][3814] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" iface="eth0" netns="/var/run/netns/cni-88941db6-0be8-14ab-63b5-e87da7d3f330" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.416 [INFO][3814] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" iface="eth0" netns="/var/run/netns/cni-88941db6-0be8-14ab-63b5-e87da7d3f330" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.416 [INFO][3814] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.417 [INFO][3814] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.688 [INFO][3907] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.692 [INFO][3907] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.692 [INFO][3907] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.759 [WARNING][3907] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.759 [INFO][3907] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.767 [INFO][3907] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:05.811049 containerd[1471]: 2026-04-13 20:25:05.792 [INFO][3814] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:05.815969 containerd[1471]: time="2026-04-13T20:25:05.815898098Z" level=info msg="TearDown network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" successfully" Apr 13 20:25:05.818531 containerd[1471]: time="2026-04-13T20:25:05.818482703Z" level=info msg="StopPodSandbox for \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" returns successfully" Apr 13 20:25:05.825084 systemd[1]: run-netns-cni\x2d88941db6\x2d0be8\x2d14ab\x2d63b5\x2de87da7d3f330.mount: Deactivated successfully. 
Apr 13 20:25:05.834171 containerd[1471]: time="2026-04-13T20:25:05.834116396Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-lkthr,Uid:ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1,Namespace:calico-system,Attempt:1,}" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.709 [INFO][3876] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.709 [INFO][3876] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" iface="eth0" netns="/var/run/netns/cni-17166e0e-2df8-f1a2-ef7b-938b830ad858" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.710 [INFO][3876] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" iface="eth0" netns="/var/run/netns/cni-17166e0e-2df8-f1a2-ef7b-938b830ad858" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.714 [INFO][3876] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" iface="eth0" netns="/var/run/netns/cni-17166e0e-2df8-f1a2-ef7b-938b830ad858" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.714 [INFO][3876] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:05.714 [INFO][3876] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.009 [INFO][3942] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.009 [INFO][3942] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.009 [INFO][3942] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.038 [WARNING][3942] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.038 [INFO][3942] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.042 [INFO][3942] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.073444 containerd[1471]: 2026-04-13 20:25:06.063 [INFO][3876] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:06.073444 containerd[1471]: time="2026-04-13T20:25:06.070371527Z" level=info msg="TearDown network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" successfully" Apr 13 20:25:06.073444 containerd[1471]: time="2026-04-13T20:25:06.070416176Z" level=info msg="StopPodSandbox for \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" returns successfully" Apr 13 20:25:06.081311 systemd[1]: run-netns-cni\x2d17166e0e\x2d2df8\x2df1a2\x2def7b\x2d938b830ad858.mount: Deactivated successfully. 
Apr 13 20:25:06.086119 containerd[1471]: time="2026-04-13T20:25:06.085800050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rgwfm,Uid:19e38d3b-6f87-4768-8075-3c82e0d91d00,Namespace:kube-system,Attempt:1,}" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.696 [INFO][3885] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.696 [INFO][3885] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" iface="eth0" netns="/var/run/netns/cni-4ce20bb9-1cc8-c557-4caf-812968b04c87" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.697 [INFO][3885] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" iface="eth0" netns="/var/run/netns/cni-4ce20bb9-1cc8-c557-4caf-812968b04c87" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.698 [INFO][3885] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" iface="eth0" netns="/var/run/netns/cni-4ce20bb9-1cc8-c557-4caf-812968b04c87" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.699 [INFO][3885] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:05.699 [INFO][3885] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.024 [INFO][3940] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.024 [INFO][3940] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.042 [INFO][3940] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.063 [WARNING][3940] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.063 [INFO][3940] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.066 [INFO][3940] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.090248 containerd[1471]: 2026-04-13 20:25:06.078 [INFO][3885] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:06.097656 containerd[1471]: time="2026-04-13T20:25:06.097489081Z" level=info msg="TearDown network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" successfully" Apr 13 20:25:06.097656 containerd[1471]: time="2026-04-13T20:25:06.097655098Z" level=info msg="StopPodSandbox for \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" returns successfully" Apr 13 20:25:06.101389 systemd[1]: run-netns-cni\x2d4ce20bb9\x2d1cc8\x2dc557\x2d4caf\x2d812968b04c87.mount: Deactivated successfully. 
Apr 13 20:25:06.109326 containerd[1471]: time="2026-04-13T20:25:06.109070186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-cptll,Uid:585a18a3-2006-4f0c-a63c-f101aa142823,Namespace:calico-system,Attempt:1,}" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.571 [INFO][3826] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.571 [INFO][3826] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" iface="eth0" netns="/var/run/netns/cni-e3c340a4-4972-b24e-b03f-30324ac2c0f5" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.575 [INFO][3826] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" iface="eth0" netns="/var/run/netns/cni-e3c340a4-4972-b24e-b03f-30324ac2c0f5" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.577 [INFO][3826] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" iface="eth0" netns="/var/run/netns/cni-e3c340a4-4972-b24e-b03f-30324ac2c0f5" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.578 [INFO][3826] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:05.578 [INFO][3826] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.044 [INFO][3922] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.044 [INFO][3922] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.067 [INFO][3922] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.106 [WARNING][3922] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.106 [INFO][3922] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.114 [INFO][3922] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.128982 containerd[1471]: 2026-04-13 20:25:06.121 [INFO][3826] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:06.130879 containerd[1471]: time="2026-04-13T20:25:06.130159756Z" level=info msg="TearDown network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" successfully" Apr 13 20:25:06.130879 containerd[1471]: time="2026-04-13T20:25:06.130220168Z" level=info msg="StopPodSandbox for \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" returns successfully" Apr 13 20:25:06.137633 containerd[1471]: time="2026-04-13T20:25:06.137489892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w2rcq,Uid:be2f09be-e384-4b88-a802-0ae6bc590ea7,Namespace:calico-system,Attempt:1,}" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.652 [INFO][3822] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.654 [INFO][3822] 
cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" iface="eth0" netns="/var/run/netns/cni-40c9473b-bbaa-c99e-78db-f3e92f338fba" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.655 [INFO][3822] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" iface="eth0" netns="/var/run/netns/cni-40c9473b-bbaa-c99e-78db-f3e92f338fba" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.656 [INFO][3822] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" iface="eth0" netns="/var/run/netns/cni-40c9473b-bbaa-c99e-78db-f3e92f338fba" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.656 [INFO][3822] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:05.656 [INFO][3822] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.060 [INFO][3935] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.062 [INFO][3935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.114 [INFO][3935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.140 [WARNING][3935] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.140 [INFO][3935] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.146 [INFO][3935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.198246 containerd[1471]: 2026-04-13 20:25:06.168 [INFO][3822] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Apr 13 20:25:06.199779 containerd[1471]: time="2026-04-13T20:25:06.199697408Z" level=info msg="TearDown network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" successfully" Apr 13 20:25:06.200213 containerd[1471]: time="2026-04-13T20:25:06.199739295Z" level=info msg="StopPodSandbox for \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" returns successfully" Apr 13 20:25:06.206670 containerd[1471]: time="2026-04-13T20:25:06.206612224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjsrs,Uid:616bbb20-6acc-4142-9ccc-5584aac07844,Namespace:calico-system,Attempt:1,}" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.570 [INFO][3871] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.581 [INFO][3871] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" iface="eth0" netns="/var/run/netns/cni-9c48b475-675d-158f-752c-6329864ab61c" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.583 [INFO][3871] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" iface="eth0" netns="/var/run/netns/cni-9c48b475-675d-158f-752c-6329864ab61c" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.589 [INFO][3871] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" iface="eth0" netns="/var/run/netns/cni-9c48b475-675d-158f-752c-6329864ab61c" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.589 [INFO][3871] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:05.589 [INFO][3871] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.061 [INFO][3924] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.063 [INFO][3924] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.146 [INFO][3924] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.164 [WARNING][3924] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.164 [INFO][3924] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.178 [INFO][3924] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.239801 containerd[1471]: 2026-04-13 20:25:06.212 [INFO][3871] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:06.243292 containerd[1471]: time="2026-04-13T20:25:06.243241137Z" level=info msg="TearDown network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" successfully" Apr 13 20:25:06.243673 containerd[1471]: time="2026-04-13T20:25:06.243488012Z" level=info msg="StopPodSandbox for \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" returns successfully" Apr 13 20:25:06.248847 containerd[1471]: time="2026-04-13T20:25:06.248348434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6865ddd44-nxqpx,Uid:a5b2719d-ae8e-4020-9d59-65852e11ae8d,Namespace:calico-system,Attempt:1,}" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.644 [INFO][3861] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.645 [INFO][3861] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" iface="eth0" netns="/var/run/netns/cni-ac2f4bcf-ab25-c48f-995b-e3b03ed53d66" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.646 [INFO][3861] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" iface="eth0" netns="/var/run/netns/cni-ac2f4bcf-ab25-c48f-995b-e3b03ed53d66" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.652 [INFO][3861] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" iface="eth0" netns="/var/run/netns/cni-ac2f4bcf-ab25-c48f-995b-e3b03ed53d66" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.653 [INFO][3861] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:05.653 [INFO][3861] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.060 [INFO][3934] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.063 [INFO][3934] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.182 [INFO][3934] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.226 [WARNING][3934] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.226 [INFO][3934] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0" Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.231 [INFO][3934] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.261931 containerd[1471]: 2026-04-13 20:25:06.253 [INFO][3861] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Apr 13 20:25:06.267453 containerd[1471]: time="2026-04-13T20:25:06.267241085Z" level=info msg="TearDown network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" successfully" Apr 13 20:25:06.267453 containerd[1471]: time="2026-04-13T20:25:06.267301464Z" level=info msg="StopPodSandbox for \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" returns successfully" Apr 13 20:25:06.410706 kubelet[2629]: I0413 20:25:06.409144 2629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx2gk\" (UniqueName: \"kubernetes.io/projected/5538c62e-5813-4ee8-9c45-fed02ec42082-kube-api-access-nx2gk\") pod \"5538c62e-5813-4ee8-9c45-fed02ec42082\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " Apr 13 20:25:06.410706 kubelet[2629]: I0413 20:25:06.409236 2629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume 
\"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-backend-key-pair\") pod \"5538c62e-5813-4ee8-9c45-fed02ec42082\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " Apr 13 20:25:06.410706 kubelet[2629]: I0413 20:25:06.409282 2629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-nginx-config\") pod \"5538c62e-5813-4ee8-9c45-fed02ec42082\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " Apr 13 20:25:06.410706 kubelet[2629]: I0413 20:25:06.409322 2629 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-ca-bundle\") pod \"5538c62e-5813-4ee8-9c45-fed02ec42082\" (UID: \"5538c62e-5813-4ee8-9c45-fed02ec42082\") " Apr 13 20:25:06.419828 kubelet[2629]: I0413 20:25:06.416533 2629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "5538c62e-5813-4ee8-9c45-fed02ec42082" (UID: "5538c62e-5813-4ee8-9c45-fed02ec42082"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:25:06.419828 kubelet[2629]: I0413 20:25:06.419588 2629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "5538c62e-5813-4ee8-9c45-fed02ec42082" (UID: "5538c62e-5813-4ee8-9c45-fed02ec42082"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:25:06.429374 kubelet[2629]: I0413 20:25:06.429059 2629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5538c62e-5813-4ee8-9c45-fed02ec42082-kube-api-access-nx2gk" (OuterVolumeSpecName: "kube-api-access-nx2gk") pod "5538c62e-5813-4ee8-9c45-fed02ec42082" (UID: "5538c62e-5813-4ee8-9c45-fed02ec42082"). InnerVolumeSpecName "kube-api-access-nx2gk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:25:06.434274 kubelet[2629]: I0413 20:25:06.433972 2629 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "5538c62e-5813-4ee8-9c45-fed02ec42082" (UID: "5538c62e-5813-4ee8-9c45-fed02ec42082"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:25:06.510035 kubelet[2629]: I0413 20:25:06.509980 2629 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-nginx-config\") on node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:25:06.510561 kubelet[2629]: I0413 20:25:06.510356 2629 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-ca-bundle\") on node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:25:06.510561 kubelet[2629]: I0413 20:25:06.510509 2629 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nx2gk\" (UniqueName: \"kubernetes.io/projected/5538c62e-5813-4ee8-9c45-fed02ec42082-kube-api-access-nx2gk\") on node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:25:06.510561 kubelet[2629]: I0413 
20:25:06.510529 2629 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5538c62e-5813-4ee8-9c45-fed02ec42082-whisker-backend-key-pair\") on node \"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:25:06.850044 systemd[1]: run-netns-cni\x2de3c340a4\x2d4972\x2db24e\x2db03f\x2d30324ac2c0f5.mount: Deactivated successfully. Apr 13 20:25:06.850232 systemd[1]: run-netns-cni\x2dac2f4bcf\x2dab25\x2dc48f\x2d995b\x2de3b03ed53d66.mount: Deactivated successfully. Apr 13 20:25:06.850359 systemd[1]: run-netns-cni\x2d9c48b475\x2d675d\x2d158f\x2d752c\x2d6329864ab61c.mount: Deactivated successfully. Apr 13 20:25:06.850486 systemd[1]: var-lib-kubelet-pods-5538c62e\x2d5813\x2d4ee8\x2d9c45\x2dfed02ec42082-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnx2gk.mount: Deactivated successfully. Apr 13 20:25:06.850610 systemd[1]: var-lib-kubelet-pods-5538c62e\x2d5813\x2d4ee8\x2d9c45\x2dfed02ec42082-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:25:06.850724 systemd[1]: run-netns-cni\x2d40c9473b\x2dbbaa\x2dc99e\x2d78db\x2df3e92f338fba.mount: Deactivated successfully. 
Apr 13 20:25:06.910096 systemd-networkd[1370]: caliae81db1f5f4: Link UP Apr 13 20:25:06.910568 systemd-networkd[1370]: caliae81db1f5f4: Gained carrier Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.204 [ERROR][3960] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.288 [INFO][3960] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0 calico-apiserver-6769499dcc- calico-system ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1 934 0 2026-04-13 20:24:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6769499dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal calico-apiserver-6769499dcc-lkthr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] caliae81db1f5f4 [] [] }} ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.288 [INFO][3960] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.961617 
containerd[1471]: 2026-04-13 20:25:06.660 [INFO][4045] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" HandleID="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.704 [INFO][4045] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" HandleID="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c120), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"calico-apiserver-6769499dcc-lkthr", "timestamp":"2026-04-13 20:25:06.660721025 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d6580)} Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.704 [INFO][4045] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.704 [INFO][4045] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.705 [INFO][4045] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.713 [INFO][4045] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.736 [INFO][4045] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.756 [INFO][4045] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.767 [INFO][4045] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.793 [INFO][4045] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.793 [INFO][4045] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.799 [INFO][4045] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8 Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.811 [INFO][4045] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.108.0/26 handle="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.846 [INFO][4045] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.1/26] block=192.168.108.0/26 handle="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.846 [INFO][4045] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.1/26] handle="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.846 [INFO][4045] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:06.961617 containerd[1471]: 2026-04-13 20:25:06.846 [INFO][4045] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.1/26] IPv6=[] ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" HandleID="k8s-pod-network.c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.875 [INFO][3960] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6769499dcc-lkthr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae81db1f5f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.875 [INFO][3960] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.1/32] ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.875 [INFO][3960] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to caliae81db1f5f4 ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.911 [INFO][3960] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.912 [INFO][3960] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8", Pod:"calico-apiserver-6769499dcc-lkthr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae81db1f5f4", MAC:"4a:07:45:9d:24:e5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:06.963821 containerd[1471]: 2026-04-13 20:25:06.955 [INFO][3960] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-lkthr" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:06.981071 systemd[1]: Removed slice kubepods-besteffort-pod5538c62e_5813_4ee8_9c45_fed02ec42082.slice - libcontainer container kubepods-besteffort-pod5538c62e_5813_4ee8_9c45_fed02ec42082.slice. Apr 13 20:25:07.133381 containerd[1471]: time="2026-04-13T20:25:07.131112526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:07.133381 containerd[1471]: time="2026-04-13T20:25:07.131373814Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:07.133381 containerd[1471]: time="2026-04-13T20:25:07.131414476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.133381 containerd[1471]: time="2026-04-13T20:25:07.131585380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.200662 systemd-networkd[1370]: cali7bfda0e842d: Link UP Apr 13 20:25:07.201113 systemd-networkd[1370]: cali7bfda0e842d: Gained carrier Apr 13 20:25:07.264132 systemd[1]: run-containerd-runc-k8s.io-c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8-runc.1Cfxth.mount: Deactivated successfully. Apr 13 20:25:07.298822 systemd[1]: Started cri-containerd-c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8.scope - libcontainer container c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8. Apr 13 20:25:07.303242 systemd[1]: Created slice kubepods-besteffort-pod7bf5432d_acc0_4734_9f97_387b5d8e7c4d.slice - libcontainer container kubepods-besteffort-pod7bf5432d_acc0_4734_9f97_387b5d8e7c4d.slice. 
Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.503 [ERROR][3986] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.575 [INFO][3986] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0 coredns-66bc5c9577- kube-system 19e38d3b-6f87-4768-8075-3c82e0d91d00 945 0 2026-04-13 20:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal coredns-66bc5c9577-rgwfm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bfda0e842d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.576 [INFO][3986] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.901 [INFO][4086] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" 
HandleID="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.952 [INFO][4086] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" HandleID="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034b710), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-rgwfm", "timestamp":"2026-04-13 20:25:06.901604729 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0003d22c0)} Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.952 [INFO][4086] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.953 [INFO][4086] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.953 [INFO][4086] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:06.964 [INFO][4086] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.017 [INFO][4086] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.050 [INFO][4086] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.058 [INFO][4086] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.102 [INFO][4086] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.102 [INFO][4086] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.109 [INFO][4086] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.135 [INFO][4086] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.108.0/26 handle="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.176 [INFO][4086] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.2/26] block=192.168.108.0/26 handle="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.176 [INFO][4086] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.2/26] handle="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.176 [INFO][4086] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:07.322225 containerd[1471]: 2026-04-13 20:25:07.176 [INFO][4086] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.2/26] IPv6=[] ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" HandleID="k8s-pod-network.d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.329592 containerd[1471]: 2026-04-13 20:25:07.185 [INFO][3986] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"19e38d3b-6f87-4768-8075-3c82e0d91d00", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-rgwfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bfda0e842d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.329592 containerd[1471]: 2026-04-13 20:25:07.185 [INFO][3986] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.2/32] ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.329592 containerd[1471]: 2026-04-13 20:25:07.185 [INFO][3986] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bfda0e842d ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.329592 containerd[1471]: 2026-04-13 20:25:07.206 [INFO][3986] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.332152 kubelet[2629]: I0413 20:25:07.322196 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/7bf5432d-acc0-4734-9f97-387b5d8e7c4d-nginx-config\") pod \"whisker-7648fcb95-gm92l\" (UID: \"7bf5432d-acc0-4734-9f97-387b5d8e7c4d\") " pod="calico-system/whisker-7648fcb95-gm92l" Apr 13 20:25:07.332152 kubelet[2629]: I0413 20:25:07.322276 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7bf5432d-acc0-4734-9f97-387b5d8e7c4d-whisker-backend-key-pair\") pod \"whisker-7648fcb95-gm92l\" 
(UID: \"7bf5432d-acc0-4734-9f97-387b5d8e7c4d\") " pod="calico-system/whisker-7648fcb95-gm92l" Apr 13 20:25:07.332152 kubelet[2629]: I0413 20:25:07.322335 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7bf5432d-acc0-4734-9f97-387b5d8e7c4d-whisker-ca-bundle\") pod \"whisker-7648fcb95-gm92l\" (UID: \"7bf5432d-acc0-4734-9f97-387b5d8e7c4d\") " pod="calico-system/whisker-7648fcb95-gm92l" Apr 13 20:25:07.332152 kubelet[2629]: I0413 20:25:07.322395 2629 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2hqp\" (UniqueName: \"kubernetes.io/projected/7bf5432d-acc0-4734-9f97-387b5d8e7c4d-kube-api-access-m2hqp\") pod \"whisker-7648fcb95-gm92l\" (UID: \"7bf5432d-acc0-4734-9f97-387b5d8e7c4d\") " pod="calico-system/whisker-7648fcb95-gm92l" Apr 13 20:25:07.333978 containerd[1471]: 2026-04-13 20:25:07.228 [INFO][3986] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"19e38d3b-6f87-4768-8075-3c82e0d91d00", ResourceVersion:"945", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a", Pod:"coredns-66bc5c9577-rgwfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bfda0e842d", MAC:"ea:75:f5:b1:63:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.333978 containerd[1471]: 2026-04-13 20:25:07.313 [INFO][3986] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a" Namespace="kube-system" Pod="coredns-66bc5c9577-rgwfm" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:07.386400 
containerd[1471]: time="2026-04-13T20:25:07.386088554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:07.386400 containerd[1471]: time="2026-04-13T20:25:07.386207623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:07.386400 containerd[1471]: time="2026-04-13T20:25:07.386247434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.387918 containerd[1471]: time="2026-04-13T20:25:07.386625205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.444915 systemd-networkd[1370]: calib437c229611: Link UP Apr 13 20:25:07.449432 systemd-networkd[1370]: calib437c229611: Gained carrier Apr 13 20:25:07.501601 systemd[1]: Started cri-containerd-d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a.scope - libcontainer container d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a. 
Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.458 [ERROR][4015] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.512 [INFO][4015] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0 goldmane-cccfbd5cf- calico-system be2f09be-e384-4b88-a802-0ae6bc590ea7 936 0 2026-04-13 20:24:38 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:cccfbd5cf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal goldmane-cccfbd5cf-w2rcq eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib437c229611 [] [] }} ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.512 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.915 [INFO][4077] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" 
HandleID="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.995 [INFO][4077] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" HandleID="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0005fb810), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"goldmane-cccfbd5cf-w2rcq", "timestamp":"2026-04-13 20:25:06.915286548 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000224000)} Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:06.995 [INFO][4077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.179 [INFO][4077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.179 [INFO][4077] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.223 [INFO][4077] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.320 [INFO][4077] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.343 [INFO][4077] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.351 [INFO][4077] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.358 [INFO][4077] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.358 [INFO][4077] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.363 [INFO][4077] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4 Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.381 [INFO][4077] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.108.0/26 handle="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.401 [INFO][4077] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.3/26] block=192.168.108.0/26 handle="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.401 [INFO][4077] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.3/26] handle="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.401 [INFO][4077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:07.512871 containerd[1471]: 2026-04-13 20:25:07.402 [INFO][4077] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.3/26] IPv6=[] ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" HandleID="k8s-pod-network.2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.410 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"be2f09be-e384-4b88-a802-0ae6bc590ea7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-cccfbd5cf-w2rcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib437c229611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.412 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.3/32] ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.412 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib437c229611 
ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.455 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.468 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"be2f09be-e384-4b88-a802-0ae6bc590ea7", ResourceVersion:"936", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4", Pod:"goldmane-cccfbd5cf-w2rcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib437c229611", MAC:"fe:3b:a8:33:45:21", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.515985 containerd[1471]: 2026-04-13 20:25:07.502 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4" Namespace="calico-system" Pod="goldmane-cccfbd5cf-w2rcq" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:07.565922 systemd-networkd[1370]: calib765097bd41: Link UP Apr 13 20:25:07.570337 systemd-networkd[1370]: calib765097bd41: Gained carrier Apr 13 20:25:07.627627 containerd[1471]: time="2026-04-13T20:25:07.627559444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7648fcb95-gm92l,Uid:7bf5432d-acc0-4734-9f97-387b5d8e7c4d,Namespace:calico-system,Attempt:0,}" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:06.537 [ERROR][3988] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:06.606 [INFO][3988] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0 calico-apiserver-6769499dcc- calico-system 585a18a3-2006-4f0c-a63c-f101aa142823 942 0 2026-04-13 20:24:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6769499dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal calico-apiserver-6769499dcc-cptll eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib765097bd41 [] [] }} ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:06.606 [INFO][3988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:06.927 [INFO][4090] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" HandleID="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.022 [INFO][4090] ipam/ipam_plugin.go 301: Auto assigning IP 
ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" HandleID="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ebe30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"calico-apiserver-6769499dcc-cptll", "timestamp":"2026-04-13 20:25:06.927252996 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.022 [INFO][4090] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.401 [INFO][4090] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.402 [INFO][4090] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.407 [INFO][4090] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.419 [INFO][4090] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.448 [INFO][4090] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.458 [INFO][4090] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.469 [INFO][4090] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.469 [INFO][4090] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.482 [INFO][4090] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.510 [INFO][4090] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.108.0/26 handle="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.532 [INFO][4090] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.4/26] block=192.168.108.0/26 handle="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.532 [INFO][4090] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.4/26] handle="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.532 [INFO][4090] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:07.634022 containerd[1471]: 2026-04-13 20:25:07.532 [INFO][4090] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.4/26] IPv6=[] ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" HandleID="k8s-pod-network.a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.540 [INFO][3988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"585a18a3-2006-4f0c-a63c-f101aa142823", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-6769499dcc-cptll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib765097bd41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.542 [INFO][3988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.4/32] ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.543 [INFO][3988] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to calib765097bd41 ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.577 [INFO][3988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.577 [INFO][3988] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"585a18a3-2006-4f0c-a63c-f101aa142823", ResourceVersion:"942", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e", Pod:"calico-apiserver-6769499dcc-cptll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib765097bd41", MAC:"8a:6f:df:67:1c:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.636397 containerd[1471]: 2026-04-13 20:25:07.620 [INFO][3988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e" Namespace="calico-system" Pod="calico-apiserver-6769499dcc-cptll" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:07.668554 containerd[1471]: time="2026-04-13T20:25:07.667701755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:07.668554 containerd[1471]: time="2026-04-13T20:25:07.667819373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:07.668554 containerd[1471]: time="2026-04-13T20:25:07.667876799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.668554 containerd[1471]: time="2026-04-13T20:25:07.668072167Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.751364 systemd-networkd[1370]: cali45acde61227: Link UP Apr 13 20:25:07.759495 systemd-networkd[1370]: cali45acde61227: Gained carrier Apr 13 20:25:07.808376 systemd[1]: Started cri-containerd-2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4.scope - libcontainer container 2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4. Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:06.616 [ERROR][4049] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:06.684 [INFO][4049] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0 calico-kube-controllers-6865ddd44- calico-system a5b2719d-ae8e-4020-9d59-65852e11ae8d 937 0 2026-04-13 20:24:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6865ddd44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal calico-kube-controllers-6865ddd44-nxqpx eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali45acde61227 [] [] }} ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" 
WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:06.686 [INFO][4049] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.060 [INFO][4102] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" HandleID="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.106 [INFO][4102] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" HandleID="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f440), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"calico-kube-controllers-6865ddd44-nxqpx", "timestamp":"2026-04-13 20:25:07.060803727 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", 
Namespace:(*v1.Namespace)(0xc00018ac60)} Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.106 [INFO][4102] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.532 [INFO][4102] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.533 [INFO][4102] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.544 [INFO][4102] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.571 [INFO][4102] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.626 [INFO][4102] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.638 [INFO][4102] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.651 [INFO][4102] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.651 [INFO][4102] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 
containerd[1471]: 2026-04-13 20:25:07.655 [INFO][4102] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.674 [INFO][4102] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.694 [INFO][4102] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.5/26] block=192.168.108.0/26 handle="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.694 [INFO][4102] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.5/26] handle="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:07.859013 containerd[1471]: 2026-04-13 20:25:07.697 [INFO][4102] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.698 [INFO][4102] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.5/26] IPv6=[] ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" HandleID="k8s-pod-network.6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.724 [INFO][4049] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0", GenerateName:"calico-kube-controllers-6865ddd44-", Namespace:"calico-system", SelfLink:"", UID:"a5b2719d-ae8e-4020-9d59-65852e11ae8d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6865ddd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-6865ddd44-nxqpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45acde61227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.725 [INFO][4049] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.5/32] ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.725 [INFO][4049] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali45acde61227 ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.759 [INFO][4049] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.776 [INFO][4049] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to 
endpoint ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0", GenerateName:"calico-kube-controllers-6865ddd44-", Namespace:"calico-system", SelfLink:"", UID:"a5b2719d-ae8e-4020-9d59-65852e11ae8d", ResourceVersion:"937", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6865ddd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a", Pod:"calico-kube-controllers-6865ddd44-nxqpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45acde61227", MAC:"de:80:c0:97:91:37", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 
20:25:07.860340 containerd[1471]: 2026-04-13 20:25:07.827 [INFO][4049] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a" Namespace="calico-system" Pod="calico-kube-controllers-6865ddd44-nxqpx" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:07.899807 containerd[1471]: time="2026-04-13T20:25:07.899290051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-rgwfm,Uid:19e38d3b-6f87-4768-8075-3c82e0d91d00,Namespace:kube-system,Attempt:1,} returns sandbox id \"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a\"" Apr 13 20:25:07.951418 containerd[1471]: time="2026-04-13T20:25:07.951117046Z" level=info msg="CreateContainer within sandbox \"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:25:07.965173 containerd[1471]: time="2026-04-13T20:25:07.964889120Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:07.966337 containerd[1471]: time="2026-04-13T20:25:07.965719371Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:07.968114 containerd[1471]: time="2026-04-13T20:25:07.967818518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.986922 systemd-networkd[1370]: cali204a7c1cdef: Link UP Apr 13 20:25:07.994002 containerd[1471]: time="2026-04-13T20:25:07.983910323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:07.995447 systemd-networkd[1370]: cali204a7c1cdef: Gained carrier Apr 13 20:25:08.027363 systemd-networkd[1370]: caliae81db1f5f4: Gained IPv6LL Apr 13 20:25:08.062117 containerd[1471]: time="2026-04-13T20:25:08.062055669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-lkthr,Uid:ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8\"" Apr 13 20:25:08.079780 containerd[1471]: time="2026-04-13T20:25:08.075565613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:25:08.134954 containerd[1471]: time="2026-04-13T20:25:08.134892225Z" level=info msg="CreateContainer within sandbox \"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9f77f32d40a3aa16a211e1d17872d8664f0517a6f6c8ed1f33d38ac68bd678f4\"" Apr 13 20:25:08.140231 containerd[1471]: time="2026-04-13T20:25:08.140181483Z" level=info msg="StartContainer for \"9f77f32d40a3aa16a211e1d17872d8664f0517a6f6c8ed1f33d38ac68bd678f4\"" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:06.643 [ERROR][4032] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:06.729 [INFO][4032] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0 csi-node-driver- calico-system 616bbb20-6acc-4142-9ccc-5584aac07844 940 0 2026-04-13 20:24:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:98cbb5577 k8s-app:csi-node-driver name:csi-node-driver 
pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal csi-node-driver-bjsrs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali204a7c1cdef [] [] }} ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:06.729 [INFO][4032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.098 [INFO][4108] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" HandleID="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.147 [INFO][4108] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" HandleID="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f010), Attrs:map[string]string{"namespace":"calico-system", 
"node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"csi-node-driver-bjsrs", "timestamp":"2026-04-13 20:25:07.098869741 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000e22c0)} Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.147 [INFO][4108] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.696 [INFO][4108] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.696 [INFO][4108] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.715 [INFO][4108] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.733 [INFO][4108] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.755 [INFO][4108] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.766 [INFO][4108] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.778 [INFO][4108] ipam/ipam.go 237: Affinity is confirmed and block has been loaded 
cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.778 [INFO][4108] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.796 [INFO][4108] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5 Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.837 [INFO][4108] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.879 [INFO][4108] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.6/26] block=192.168.108.0/26 handle="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.879 [INFO][4108] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.6/26] handle="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.879 [INFO][4108] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:25:08.146496 containerd[1471]: 2026-04-13 20:25:07.879 [INFO][4108] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.6/26] IPv6=[] ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" HandleID="k8s-pod-network.e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:07.958 [INFO][4032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616bbb20-6acc-4142-9ccc-5584aac07844", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-bjsrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204a7c1cdef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:07.964 [INFO][4032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.6/32] ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:07.964 [INFO][4032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali204a7c1cdef ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:08.029 [INFO][4032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:08.044 [INFO][4032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" Namespace="calico-system" Pod="csi-node-driver-bjsrs" 
WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616bbb20-6acc-4142-9ccc-5584aac07844", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5", Pod:"csi-node-driver-bjsrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204a7c1cdef", MAC:"4e:a2:dc:fd:68:71", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:08.151714 containerd[1471]: 2026-04-13 20:25:08.109 [INFO][4032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5" 
Namespace="calico-system" Pod="csi-node-driver-bjsrs" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0" Apr 13 20:25:08.150036 systemd[1]: Started cri-containerd-a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e.scope - libcontainer container a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e. Apr 13 20:25:08.250978 containerd[1471]: time="2026-04-13T20:25:08.249404866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:08.250978 containerd[1471]: time="2026-04-13T20:25:08.249511860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:08.250978 containerd[1471]: time="2026-04-13T20:25:08.249541146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:08.250978 containerd[1471]: time="2026-04-13T20:25:08.249685428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:08.282052 systemd[1]: Started cri-containerd-9f77f32d40a3aa16a211e1d17872d8664f0517a6f6c8ed1f33d38ac68bd678f4.scope - libcontainer container 9f77f32d40a3aa16a211e1d17872d8664f0517a6f6c8ed1f33d38ac68bd678f4. Apr 13 20:25:08.370990 containerd[1471]: time="2026-04-13T20:25:08.370920540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-cccfbd5cf-w2rcq,Uid:be2f09be-e384-4b88-a802-0ae6bc590ea7,Namespace:calico-system,Attempt:1,} returns sandbox id \"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4\"" Apr 13 20:25:08.375054 systemd[1]: Started cri-containerd-6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a.scope - libcontainer container 6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a. 
Apr 13 20:25:08.399983 kubelet[2629]: I0413 20:25:08.399193 2629 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5538c62e-5813-4ee8-9c45-fed02ec42082" path="/var/lib/kubelet/pods/5538c62e-5813-4ee8-9c45-fed02ec42082/volumes" Apr 13 20:25:08.464939 containerd[1471]: time="2026-04-13T20:25:08.464864082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6769499dcc-cptll,Uid:585a18a3-2006-4f0c-a63c-f101aa142823,Namespace:calico-system,Attempt:1,} returns sandbox id \"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e\"" Apr 13 20:25:08.506109 containerd[1471]: time="2026-04-13T20:25:08.504123494Z" level=info msg="StartContainer for \"9f77f32d40a3aa16a211e1d17872d8664f0517a6f6c8ed1f33d38ac68bd678f4\" returns successfully" Apr 13 20:25:08.532561 containerd[1471]: time="2026-04-13T20:25:08.531349868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:08.532561 containerd[1471]: time="2026-04-13T20:25:08.531471149Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:08.532561 containerd[1471]: time="2026-04-13T20:25:08.531505941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:08.535501 containerd[1471]: time="2026-04-13T20:25:08.534666850Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:08.612527 systemd[1]: Started cri-containerd-e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5.scope - libcontainer container e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5. 
Apr 13 20:25:08.657365 systemd-networkd[1370]: cali7bfda0e842d: Gained IPv6LL Apr 13 20:25:08.913041 containerd[1471]: time="2026-04-13T20:25:08.912942214Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6865ddd44-nxqpx,Uid:a5b2719d-ae8e-4020-9d59-65852e11ae8d,Namespace:calico-system,Attempt:1,} returns sandbox id \"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a\"" Apr 13 20:25:08.947009 systemd-networkd[1370]: calided136547d4: Link UP Apr 13 20:25:08.954474 systemd-networkd[1370]: calided136547d4: Gained carrier Apr 13 20:25:09.001259 containerd[1471]: time="2026-04-13T20:25:08.998478599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bjsrs,Uid:616bbb20-6acc-4142-9ccc-5584aac07844,Namespace:calico-system,Attempt:1,} returns sandbox id \"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5\"" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.327 [ERROR][4276] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.470 [INFO][4276] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0 whisker-7648fcb95- calico-system 7bf5432d-acc0-4734-9f97-387b5d8e7c4d 969 0 2026-04-13 20:25:07 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7648fcb95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal whisker-7648fcb95-gm92l eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calided136547d4 [] [] }} 
ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.503 [INFO][4276] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.753 [INFO][4527] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" HandleID="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.769 [INFO][4527] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" HandleID="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003cebc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"whisker-7648fcb95-gm92l", "timestamp":"2026-04-13 20:25:08.753881365 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002409a0)} Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.769 [INFO][4527] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.769 [INFO][4527] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.770 [INFO][4527] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.775 [INFO][4527] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.787 [INFO][4527] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.802 [INFO][4527] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.808 [INFO][4527] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.815 [INFO][4527] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.815 [INFO][4527] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" 
host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.835 [INFO][4527] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6 Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.863 [INFO][4527] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.108.0/26 handle="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.895 [INFO][4527] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.7/26] block=192.168.108.0/26 handle="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.895 [INFO][4527] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.7/26] handle="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.896 [INFO][4527] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:25:09.012889 containerd[1471]: 2026-04-13 20:25:08.896 [INFO][4527] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.7/26] IPv6=[] ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" HandleID="k8s-pod-network.566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.922 [INFO][4276] cni-plugin/k8s.go 418: Populated endpoint ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0", GenerateName:"whisker-7648fcb95-", Namespace:"calico-system", SelfLink:"", UID:"7bf5432d-acc0-4734-9f97-387b5d8e7c4d", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7648fcb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-7648fcb95-gm92l", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.108.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calided136547d4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.923 [INFO][4276] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.7/32] ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.924 [INFO][4276] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calided136547d4 ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.956 [INFO][4276] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.957 [INFO][4276] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0", GenerateName:"whisker-7648fcb95-", Namespace:"calico-system", SelfLink:"", UID:"7bf5432d-acc0-4734-9f97-387b5d8e7c4d", ResourceVersion:"969", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 25, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7648fcb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6", Pod:"whisker-7648fcb95-gm92l", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.108.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calided136547d4", MAC:"de:9f:07:6a:dc:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:09.017276 containerd[1471]: 2026-04-13 20:25:08.990 [INFO][4276] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6" Namespace="calico-system" Pod="whisker-7648fcb95-gm92l" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--7648fcb95--gm92l-eth0" Apr 13 20:25:09.040242 
systemd-networkd[1370]: calib765097bd41: Gained IPv6LL Apr 13 20:25:09.125799 kubelet[2629]: I0413 20:25:09.123101 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-rgwfm" podStartSLOduration=46.123066547 podStartE2EDuration="46.123066547s" podCreationTimestamp="2026-04-13 20:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:25:09.117254661 +0000 UTC m=+53.010224219" watchObservedRunningTime="2026-04-13 20:25:09.123066547 +0000 UTC m=+53.016036090" Apr 13 20:25:09.155283 containerd[1471]: time="2026-04-13T20:25:09.154358608Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:09.158977 containerd[1471]: time="2026-04-13T20:25:09.157585844Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:09.158977 containerd[1471]: time="2026-04-13T20:25:09.157645386Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:09.161782 containerd[1471]: time="2026-04-13T20:25:09.160669796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:09.237791 systemd-networkd[1370]: calib437c229611: Gained IPv6LL Apr 13 20:25:09.257110 systemd[1]: Started cri-containerd-566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6.scope - libcontainer container 566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6. 
Apr 13 20:25:09.361214 systemd-networkd[1370]: cali204a7c1cdef: Gained IPv6LL Apr 13 20:25:09.470506 containerd[1471]: time="2026-04-13T20:25:09.470292045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7648fcb95-gm92l,Uid:7bf5432d-acc0-4734-9f97-387b5d8e7c4d,Namespace:calico-system,Attempt:0,} returns sandbox id \"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6\"" Apr 13 20:25:09.616131 systemd-networkd[1370]: cali45acde61227: Gained IPv6LL Apr 13 20:25:10.313804 kernel: calico-node[4257]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:25:10.384325 systemd-networkd[1370]: calided136547d4: Gained IPv6LL Apr 13 20:25:12.003956 systemd-networkd[1370]: vxlan.calico: Link UP Apr 13 20:25:12.003978 systemd-networkd[1370]: vxlan.calico: Gained carrier Apr 13 20:25:12.794547 containerd[1471]: time="2026-04-13T20:25:12.794467514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:12.796877 containerd[1471]: time="2026-04-13T20:25:12.796813226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:25:12.800840 containerd[1471]: time="2026-04-13T20:25:12.800794926Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:12.808701 containerd[1471]: time="2026-04-13T20:25:12.808597792Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:12.811360 containerd[1471]: time="2026-04-13T20:25:12.811093863Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 4.735432763s" Apr 13 20:25:12.812695 containerd[1471]: time="2026-04-13T20:25:12.812629000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:25:12.816823 containerd[1471]: time="2026-04-13T20:25:12.816215947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:25:12.824302 containerd[1471]: time="2026-04-13T20:25:12.824112240Z" level=info msg="CreateContainer within sandbox \"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:25:12.852575 containerd[1471]: time="2026-04-13T20:25:12.852503349Z" level=info msg="CreateContainer within sandbox \"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7b7fc280880abdd1090d3d4075b6c7bd0c3723c629502408fbe782d5e5d3267b\"" Apr 13 20:25:12.862209 containerd[1471]: time="2026-04-13T20:25:12.858236250Z" level=info msg="StartContainer for \"7b7fc280880abdd1090d3d4075b6c7bd0c3723c629502408fbe782d5e5d3267b\"" Apr 13 20:25:12.953017 systemd[1]: Started cri-containerd-7b7fc280880abdd1090d3d4075b6c7bd0c3723c629502408fbe782d5e5d3267b.scope - libcontainer container 7b7fc280880abdd1090d3d4075b6c7bd0c3723c629502408fbe782d5e5d3267b. 
Apr 13 20:25:13.106864 containerd[1471]: time="2026-04-13T20:25:13.106686465Z" level=info msg="StartContainer for \"7b7fc280880abdd1090d3d4075b6c7bd0c3723c629502408fbe782d5e5d3267b\" returns successfully" Apr 13 20:25:13.203026 kubelet[2629]: I0413 20:25:13.202927 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6769499dcc-lkthr" podStartSLOduration=31.461155179 podStartE2EDuration="36.202897031s" podCreationTimestamp="2026-04-13 20:24:37 +0000 UTC" firstStartedPulling="2026-04-13 20:25:08.073294923 +0000 UTC m=+51.966264446" lastFinishedPulling="2026-04-13 20:25:12.815036781 +0000 UTC m=+56.708006298" observedRunningTime="2026-04-13 20:25:13.201917219 +0000 UTC m=+57.094886767" watchObservedRunningTime="2026-04-13 20:25:13.202897031 +0000 UTC m=+57.095866573" Apr 13 20:25:13.841959 systemd-networkd[1370]: vxlan.calico: Gained IPv6LL Apr 13 20:25:14.179513 kubelet[2629]: I0413 20:25:14.179312 2629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:25:14.391332 containerd[1471]: time="2026-04-13T20:25:14.390637331Z" level=info msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.485 [INFO][4816] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.485 [INFO][4816] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" iface="eth0" netns="/var/run/netns/cni-afb9a3f8-05b5-396e-3568-4e14d4275a44" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.487 [INFO][4816] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" iface="eth0" netns="/var/run/netns/cni-afb9a3f8-05b5-396e-3568-4e14d4275a44" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.487 [INFO][4816] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" iface="eth0" netns="/var/run/netns/cni-afb9a3f8-05b5-396e-3568-4e14d4275a44" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.488 [INFO][4816] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.488 [INFO][4816] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.535 [INFO][4823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.536 [INFO][4823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.536 [INFO][4823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.551 [WARNING][4823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.551 [INFO][4823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.555 [INFO][4823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:14.562787 containerd[1471]: 2026-04-13 20:25:14.558 [INFO][4816] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:14.567022 containerd[1471]: time="2026-04-13T20:25:14.564927558Z" level=info msg="TearDown network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" successfully" Apr 13 20:25:14.567022 containerd[1471]: time="2026-04-13T20:25:14.564984005Z" level=info msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" returns successfully" Apr 13 20:25:14.573845 systemd[1]: run-netns-cni\x2dafb9a3f8\x2d05b5\x2d396e\x2d3568\x2d4e14d4275a44.mount: Deactivated successfully. 
Apr 13 20:25:14.581891 containerd[1471]: time="2026-04-13T20:25:14.578212171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzkq6,Uid:07973dbc-15b9-4935-84bf-81b38774c1cf,Namespace:kube-system,Attempt:1,}" Apr 13 20:25:14.865222 systemd-networkd[1370]: cali48f0c2eebbe: Link UP Apr 13 20:25:14.875544 systemd-networkd[1370]: cali48f0c2eebbe: Gained carrier Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.697 [INFO][4831] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0 coredns-66bc5c9577- kube-system 07973dbc-15b9-4935-84bf-81b38774c1cf 1027 0 2026-04-13 20:24:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal coredns-66bc5c9577-lzkq6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali48f0c2eebbe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.698 [INFO][4831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.763 [INFO][4842] ipam/ipam_plugin.go 235: 
Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" HandleID="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.780 [INFO][4842] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" HandleID="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001021d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", "pod":"coredns-66bc5c9577-lzkq6", "timestamp":"2026-04-13 20:25:14.763016421 +0000 UTC"}, Hostname:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002d5ce0)} Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.780 [INFO][4842] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.780 [INFO][4842] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.780 [INFO][4842] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal' Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.786 [INFO][4842] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.797 [INFO][4842] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.808 [INFO][4842] ipam/ipam.go 526: Trying affinity for 192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.813 [INFO][4842] ipam/ipam.go 160: Attempting to load block cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.818 [INFO][4842] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.108.0/26 host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.819 [INFO][4842] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.108.0/26 handle="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.822 [INFO][4842] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315 Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.830 [INFO][4842] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.108.0/26 handle="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.844 [INFO][4842] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.108.8/26] block=192.168.108.0/26 handle="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.845 [INFO][4842] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.108.8/26] handle="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" host="ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal" Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.845 [INFO][4842] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:14.912983 containerd[1471]: 2026-04-13 20:25:14.846 [INFO][4842] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.108.8/26] IPv6=[] ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" HandleID="k8s-pod-network.fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.917856 containerd[1471]: 2026-04-13 20:25:14.852 [INFO][4831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"07973dbc-15b9-4935-84bf-81b38774c1cf", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-66bc5c9577-lzkq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48f0c2eebbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:14.917856 containerd[1471]: 2026-04-13 20:25:14.852 [INFO][4831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.108.8/32] ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.917856 containerd[1471]: 2026-04-13 20:25:14.852 [INFO][4831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali48f0c2eebbe ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.917856 containerd[1471]: 2026-04-13 20:25:14.872 [INFO][4831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.918342 containerd[1471]: 2026-04-13 20:25:14.873 [INFO][4831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0", GenerateName:"coredns-66bc5c9577-", 
Namespace:"kube-system", SelfLink:"", UID:"07973dbc-15b9-4935-84bf-81b38774c1cf", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315", Pod:"coredns-66bc5c9577-lzkq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48f0c2eebbe", MAC:"5a:b2:4f:21:56:58", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:14.918342 
containerd[1471]: 2026-04-13 20:25:14.903 [INFO][4831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315" Namespace="kube-system" Pod="coredns-66bc5c9577-lzkq6" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:14.973682 containerd[1471]: time="2026-04-13T20:25:14.971298215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:25:14.973682 containerd[1471]: time="2026-04-13T20:25:14.971400464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:25:14.973682 containerd[1471]: time="2026-04-13T20:25:14.971459074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:14.973682 containerd[1471]: time="2026-04-13T20:25:14.971667719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:25:15.041535 systemd[1]: Started cri-containerd-fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315.scope - libcontainer container fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315. 
Apr 13 20:25:15.173865 containerd[1471]: time="2026-04-13T20:25:15.173423900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-lzkq6,Uid:07973dbc-15b9-4935-84bf-81b38774c1cf,Namespace:kube-system,Attempt:1,} returns sandbox id \"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315\"" Apr 13 20:25:15.198762 containerd[1471]: time="2026-04-13T20:25:15.198452057Z" level=info msg="CreateContainer within sandbox \"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:25:15.221116 containerd[1471]: time="2026-04-13T20:25:15.220149789Z" level=info msg="CreateContainer within sandbox \"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"12a8a0777e318c6ba4cc9c8927725fef08823d95928fb5810a6ebf9902119737\"" Apr 13 20:25:15.224848 containerd[1471]: time="2026-04-13T20:25:15.223297215Z" level=info msg="StartContainer for \"12a8a0777e318c6ba4cc9c8927725fef08823d95928fb5810a6ebf9902119737\"" Apr 13 20:25:15.284115 systemd[1]: Started cri-containerd-12a8a0777e318c6ba4cc9c8927725fef08823d95928fb5810a6ebf9902119737.scope - libcontainer container 12a8a0777e318c6ba4cc9c8927725fef08823d95928fb5810a6ebf9902119737. 
Apr 13 20:25:15.347133 containerd[1471]: time="2026-04-13T20:25:15.347038370Z" level=info msg="StartContainer for \"12a8a0777e318c6ba4cc9c8927725fef08823d95928fb5810a6ebf9902119737\" returns successfully" Apr 13 20:25:16.264621 kubelet[2629]: I0413 20:25:16.263059 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-lzkq6" podStartSLOduration=53.263025613 podStartE2EDuration="53.263025613s" podCreationTimestamp="2026-04-13 20:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:25:16.230981976 +0000 UTC m=+60.123951520" watchObservedRunningTime="2026-04-13 20:25:16.263025613 +0000 UTC m=+60.155995154" Apr 13 20:25:16.337081 containerd[1471]: time="2026-04-13T20:25:16.336991300Z" level=info msg="StopPodSandbox for \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\"" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.429 [WARNING][4953] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"19e38d3b-6f87-4768-8075-3c82e0d91d00", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a", Pod:"coredns-66bc5c9577-rgwfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bfda0e842d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.430 [INFO][4953] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.430 [INFO][4953] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" iface="eth0" netns="" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.430 [INFO][4953] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.430 [INFO][4953] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.486 [INFO][4963] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.487 [INFO][4963] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.487 [INFO][4963] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.505 [WARNING][4963] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.505 [INFO][4963] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.509 [INFO][4963] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:16.517147 containerd[1471]: 2026-04-13 20:25:16.513 [INFO][4953] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.520300 containerd[1471]: time="2026-04-13T20:25:16.518700109Z" level=info msg="TearDown network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" successfully" Apr 13 20:25:16.520300 containerd[1471]: time="2026-04-13T20:25:16.518803394Z" level=info msg="StopPodSandbox for \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" returns successfully" Apr 13 20:25:16.520300 containerd[1471]: time="2026-04-13T20:25:16.519609117Z" level=info msg="RemovePodSandbox for \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\"" Apr 13 20:25:16.520300 containerd[1471]: time="2026-04-13T20:25:16.519660742Z" level=info msg="Forcibly stopping sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\"" Apr 13 20:25:16.656292 systemd-networkd[1370]: cali48f0c2eebbe: Gained IPv6LL Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.588 [WARNING][4978] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"19e38d3b-6f87-4768-8075-3c82e0d91d00", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"d9a0e6c3236413e0fa09fc8b7c150fdd4ccb541c00a148c45f5420af587f2c8a", Pod:"coredns-66bc5c9577-rgwfm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bfda0e842d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.589 [INFO][4978] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.589 [INFO][4978] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" iface="eth0" netns="" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.589 [INFO][4978] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.589 [INFO][4978] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.641 [INFO][4986] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.642 [INFO][4986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.642 [INFO][4986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.652 [WARNING][4986] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.652 [INFO][4986] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" HandleID="k8s-pod-network.c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--rgwfm-eth0" Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.658 [INFO][4986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:16.663875 containerd[1471]: 2026-04-13 20:25:16.661 [INFO][4978] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5" Apr 13 20:25:16.666712 containerd[1471]: time="2026-04-13T20:25:16.663923154Z" level=info msg="TearDown network for sandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" successfully" Apr 13 20:25:16.672198 containerd[1471]: time="2026-04-13T20:25:16.672099122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:25:16.672359 containerd[1471]: time="2026-04-13T20:25:16.672247726Z" level=info msg="RemovePodSandbox \"c997c1f6824ad2d08c5de0efad615959c0af59771e82d27fa8ce6cf3e7e380a5\" returns successfully" Apr 13 20:25:16.673248 containerd[1471]: time="2026-04-13T20:25:16.673130883Z" level=info msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.754 [WARNING][5001] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"07973dbc-15b9-4935-84bf-81b38774c1cf", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315", Pod:"coredns-66bc5c9577-lzkq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali48f0c2eebbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.755 [INFO][5001] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.755 [INFO][5001] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" iface="eth0" netns="" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.755 [INFO][5001] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.755 [INFO][5001] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.811 [INFO][5011] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.812 [INFO][5011] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.812 [INFO][5011] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.849 [WARNING][5011] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.850 [INFO][5011] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.859 [INFO][5011] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:16.879455 containerd[1471]: 2026-04-13 20:25:16.870 [INFO][5001] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:16.879455 containerd[1471]: time="2026-04-13T20:25:16.879305220Z" level=info msg="TearDown network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" successfully" Apr 13 20:25:16.879455 containerd[1471]: time="2026-04-13T20:25:16.879344234Z" level=info msg="StopPodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" returns successfully" Apr 13 20:25:16.883327 containerd[1471]: time="2026-04-13T20:25:16.882974891Z" level=info msg="RemovePodSandbox for \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" Apr 13 20:25:16.883327 containerd[1471]: time="2026-04-13T20:25:16.883022915Z" level=info msg="Forcibly stopping sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\"" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:16.972 [WARNING][5028] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"07973dbc-15b9-4935-84bf-81b38774c1cf", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"fac10ab4d7873301a7a9e0a62f4a53bc9440fff26b767d04f530661a43f77315", Pod:"coredns-66bc5c9577-lzkq6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.108.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali48f0c2eebbe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:16.973 [INFO][5028] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:16.973 [INFO][5028] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" iface="eth0" netns="" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:16.973 [INFO][5028] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:16.973 [INFO][5028] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.016 [INFO][5035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.016 [INFO][5035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.016 [INFO][5035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.029 [WARNING][5035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.029 [INFO][5035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" HandleID="k8s-pod-network.b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-coredns--66bc5c9577--lzkq6-eth0" Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.032 [INFO][5035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:17.044705 containerd[1471]: 2026-04-13 20:25:17.037 [INFO][5028] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80" Apr 13 20:25:17.047584 containerd[1471]: time="2026-04-13T20:25:17.046150829Z" level=info msg="TearDown network for sandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" successfully" Apr 13 20:25:17.053998 containerd[1471]: time="2026-04-13T20:25:17.053704545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:25:17.053998 containerd[1471]: time="2026-04-13T20:25:17.053838858Z" level=info msg="RemovePodSandbox \"b65e2329732694283dae5e6ce69aa951e56e918590737f699e3b7fc5f90bab80\" returns successfully" Apr 13 20:25:17.055401 containerd[1471]: time="2026-04-13T20:25:17.054861948Z" level=info msg="StopPodSandbox for \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\"" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.145 [WARNING][5049] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0", GenerateName:"calico-kube-controllers-6865ddd44-", Namespace:"calico-system", SelfLink:"", UID:"a5b2719d-ae8e-4020-9d59-65852e11ae8d", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6865ddd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a", Pod:"calico-kube-controllers-6865ddd44-nxqpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45acde61227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.145 [INFO][5049] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.145 [INFO][5049] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" iface="eth0" netns="" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.145 [INFO][5049] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.145 [INFO][5049] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.190 [INFO][5057] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0" Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.191 [INFO][5057] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.191 [INFO][5057] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.206 [WARNING][5057] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0"
Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.206 [INFO][5057] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0"
Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.210 [INFO][5057] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.220271 containerd[1471]: 2026-04-13 20:25:17.216 [INFO][5049] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14"
Apr 13 20:25:17.220271 containerd[1471]: time="2026-04-13T20:25:17.220022827Z" level=info msg="TearDown network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" successfully"
Apr 13 20:25:17.220271 containerd[1471]: time="2026-04-13T20:25:17.220058654Z" level=info msg="StopPodSandbox for \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" returns successfully"
Apr 13 20:25:17.222101 containerd[1471]: time="2026-04-13T20:25:17.221179014Z" level=info msg="RemovePodSandbox for \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\""
Apr 13 20:25:17.222101 containerd[1471]: time="2026-04-13T20:25:17.221224076Z" level=info msg="Forcibly stopping sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\""
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.278 [WARNING][5071] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0", GenerateName:"calico-kube-controllers-6865ddd44-", Namespace:"calico-system", SelfLink:"", UID:"a5b2719d-ae8e-4020-9d59-65852e11ae8d", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6865ddd44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a", Pod:"calico-kube-controllers-6865ddd44-nxqpx", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.108.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali45acde61227", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.279 [INFO][5071] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.279 [INFO][5071] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" iface="eth0" netns=""
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.279 [INFO][5071] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.279 [INFO][5071] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.314 [INFO][5078] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.314 [INFO][5078] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.314 [INFO][5078] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.329 [WARNING][5078] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.329 [INFO][5078] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" HandleID="k8s-pod-network.cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--kube--controllers--6865ddd44--nxqpx-eth0"
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.333 [INFO][5078] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.341010 containerd[1471]: 2026-04-13 20:25:17.336 [INFO][5071] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14"
Apr 13 20:25:17.341010 containerd[1471]: time="2026-04-13T20:25:17.340665470Z" level=info msg="TearDown network for sandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" successfully"
Apr 13 20:25:17.349120 containerd[1471]: time="2026-04-13T20:25:17.348421638Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:25:17.349120 containerd[1471]: time="2026-04-13T20:25:17.348540058Z" level=info msg="RemovePodSandbox \"cb75f9ff40d15f339ead438a51cb5faf1458a25002535dfa2bccdaf176b92f14\" returns successfully"
Apr 13 20:25:17.349469 containerd[1471]: time="2026-04-13T20:25:17.349428517Z" level=info msg="StopPodSandbox for \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\""
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.423 [WARNING][5095] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616bbb20-6acc-4142-9ccc-5584aac07844", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5", Pod:"csi-node-driver-bjsrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204a7c1cdef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.423 [INFO][5095] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.423 [INFO][5095] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" iface="eth0" netns=""
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.423 [INFO][5095] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.423 [INFO][5095] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.457 [INFO][5102] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.458 [INFO][5102] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.458 [INFO][5102] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.471 [WARNING][5102] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.472 [INFO][5102] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.475 [INFO][5102] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.481397 containerd[1471]: 2026-04-13 20:25:17.478 [INFO][5095] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.481397 containerd[1471]: time="2026-04-13T20:25:17.481064394Z" level=info msg="TearDown network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" successfully"
Apr 13 20:25:17.481397 containerd[1471]: time="2026-04-13T20:25:17.481109861Z" level=info msg="StopPodSandbox for \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" returns successfully"
Apr 13 20:25:17.485805 containerd[1471]: time="2026-04-13T20:25:17.483437141Z" level=info msg="RemovePodSandbox for \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\""
Apr 13 20:25:17.485805 containerd[1471]: time="2026-04-13T20:25:17.483632075Z" level=info msg="Forcibly stopping sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\""
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.548 [WARNING][5116] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"616bbb20-6acc-4142-9ccc-5584aac07844", ResourceVersion:"989", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"98cbb5577", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5", Pod:"csi-node-driver-bjsrs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.108.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali204a7c1cdef", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.548 [INFO][5116] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.548 [INFO][5116] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" iface="eth0" netns=""
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.548 [INFO][5116] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.548 [INFO][5116] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.583 [INFO][5124] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.583 [INFO][5124] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.584 [INFO][5124] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.598 [WARNING][5124] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.599 [INFO][5124] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" HandleID="k8s-pod-network.cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-csi--node--driver--bjsrs-eth0"
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.602 [INFO][5124] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.607826 containerd[1471]: 2026-04-13 20:25:17.605 [INFO][5116] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e"
Apr 13 20:25:17.610801 containerd[1471]: time="2026-04-13T20:25:17.607906902Z" level=info msg="TearDown network for sandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" successfully"
Apr 13 20:25:17.615039 containerd[1471]: time="2026-04-13T20:25:17.614969272Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:25:17.615189 containerd[1471]: time="2026-04-13T20:25:17.615075153Z" level=info msg="RemovePodSandbox \"cf3e5fe3b1b14ea090ff67bc6cfb8c4af03210036e3a285c6a0825dd1993e94e\" returns successfully"
Apr 13 20:25:17.615877 containerd[1471]: time="2026-04-13T20:25:17.615837174Z" level=info msg="StopPodSandbox for \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\""
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.682 [WARNING][5138] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.683 [INFO][5138] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.683 [INFO][5138] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" iface="eth0" netns=""
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.683 [INFO][5138] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.683 [INFO][5138] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.722 [INFO][5145] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.722 [INFO][5145] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.722 [INFO][5145] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.735 [WARNING][5145] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.736 [INFO][5145] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.739 [INFO][5145] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.745392 containerd[1471]: 2026-04-13 20:25:17.742 [INFO][5138] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.745392 containerd[1471]: time="2026-04-13T20:25:17.745357144Z" level=info msg="TearDown network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" successfully"
Apr 13 20:25:17.751024 containerd[1471]: time="2026-04-13T20:25:17.745395014Z" level=info msg="StopPodSandbox for \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" returns successfully"
Apr 13 20:25:17.751024 containerd[1471]: time="2026-04-13T20:25:17.748568247Z" level=info msg="RemovePodSandbox for \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\""
Apr 13 20:25:17.751024 containerd[1471]: time="2026-04-13T20:25:17.748624207Z" level=info msg="Forcibly stopping sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\""
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.805 [WARNING][5159] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" WorkloadEndpoint="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.806 [INFO][5159] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.806 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" iface="eth0" netns=""
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.806 [INFO][5159] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.806 [INFO][5159] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.843 [INFO][5167] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.843 [INFO][5167] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.843 [INFO][5167] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.855 [WARNING][5167] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.855 [INFO][5167] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" HandleID="k8s-pod-network.795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-whisker--68478d5f94--jn8p5-eth0"
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.860 [INFO][5167] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:17.865964 containerd[1471]: 2026-04-13 20:25:17.862 [INFO][5159] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8"
Apr 13 20:25:17.866990 containerd[1471]: time="2026-04-13T20:25:17.866101307Z" level=info msg="TearDown network for sandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" successfully"
Apr 13 20:25:17.873104 containerd[1471]: time="2026-04-13T20:25:17.872817102Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 13 20:25:17.873104 containerd[1471]: time="2026-04-13T20:25:17.872912628Z" level=info msg="RemovePodSandbox \"795046c7db50152be66e110f471edddcc226f2fb14f77bd5511c4361277d99c8\" returns successfully"
Apr 13 20:25:17.874719 containerd[1471]: time="2026-04-13T20:25:17.874286000Z" level=info msg="StopPodSandbox for \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\""
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.950 [WARNING][5181] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"be2f09be-e384-4b88-a802-0ae6bc590ea7", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4", Pod:"goldmane-cccfbd5cf-w2rcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib437c229611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.951 [INFO][5181] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.951 [INFO][5181] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" iface="eth0" netns=""
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.951 [INFO][5181] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.951 [INFO][5181] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.986 [INFO][5189] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.986 [INFO][5189] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:17.986 [INFO][5189] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:18.001 [WARNING][5189] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:18.001 [INFO][5189] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0"
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:18.004 [INFO][5189] ipam/ipam_plugin.go 459: Released host-wide IPAM lock.
Apr 13 20:25:18.010537 containerd[1471]: 2026-04-13 20:25:18.007 [INFO][5181] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.010537 containerd[1471]: time="2026-04-13T20:25:18.010241041Z" level=info msg="TearDown network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" successfully"
Apr 13 20:25:18.010537 containerd[1471]: time="2026-04-13T20:25:18.010282785Z" level=info msg="StopPodSandbox for \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" returns successfully"
Apr 13 20:25:18.013605 containerd[1471]: time="2026-04-13T20:25:18.011789676Z" level=info msg="RemovePodSandbox for \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\""
Apr 13 20:25:18.013605 containerd[1471]: time="2026-04-13T20:25:18.011834951Z" level=info msg="Forcibly stopping sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\""
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.087 [WARNING][5203] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0", GenerateName:"goldmane-cccfbd5cf-", Namespace:"calico-system", SelfLink:"", UID:"be2f09be-e384-4b88-a802-0ae6bc590ea7", ResourceVersion:"976", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"cccfbd5cf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4", Pod:"goldmane-cccfbd5cf-w2rcq", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.108.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib437c229611", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.088 [INFO][5203] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.088 [INFO][5203] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" iface="eth0" netns=""
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.088 [INFO][5203] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.088 [INFO][5203] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e"
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.143 [INFO][5210] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0"
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.145 [INFO][5210] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock.
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.145 [INFO][5210] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock.
Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.163 [WARNING][5210] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist.
Ignoring ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.163 [INFO][5210] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" HandleID="k8s-pod-network.f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-goldmane--cccfbd5cf--w2rcq-eth0" Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.167 [INFO][5210] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:18.174249 containerd[1471]: 2026-04-13 20:25:18.170 [INFO][5203] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e" Apr 13 20:25:18.175385 containerd[1471]: time="2026-04-13T20:25:18.174308629Z" level=info msg="TearDown network for sandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" successfully" Apr 13 20:25:18.181662 containerd[1471]: time="2026-04-13T20:25:18.181494623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:25:18.181662 containerd[1471]: time="2026-04-13T20:25:18.181605516Z" level=info msg="RemovePodSandbox \"f70c64ffc11843b0b182f5f3217817a252c33726b7ce819d8b0d4c4b3f09f45e\" returns successfully" Apr 13 20:25:18.182717 containerd[1471]: time="2026-04-13T20:25:18.182457440Z" level=info msg="StopPodSandbox for \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\"" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.250 [WARNING][5235] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"585a18a3-2006-4f0c-a63c-f101aa142823", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e", Pod:"calico-apiserver-6769499dcc-cptll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib765097bd41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.250 [INFO][5235] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.250 [INFO][5235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" iface="eth0" netns="" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.250 [INFO][5235] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.250 [INFO][5235] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.288 [INFO][5242] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.289 [INFO][5242] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.289 [INFO][5242] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.301 [WARNING][5242] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.302 [INFO][5242] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.305 [INFO][5242] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:18.311835 containerd[1471]: 2026-04-13 20:25:18.308 [INFO][5235] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.311835 containerd[1471]: time="2026-04-13T20:25:18.311682350Z" level=info msg="TearDown network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" successfully" Apr 13 20:25:18.311835 containerd[1471]: time="2026-04-13T20:25:18.311730494Z" level=info msg="StopPodSandbox for \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" returns successfully" Apr 13 20:25:18.313023 containerd[1471]: time="2026-04-13T20:25:18.312575198Z" level=info msg="RemovePodSandbox for \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\"" Apr 13 20:25:18.313023 containerd[1471]: time="2026-04-13T20:25:18.312623108Z" level=info msg="Forcibly stopping sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\"" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.375 [WARNING][5257] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"585a18a3-2006-4f0c-a63c-f101aa142823", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e", Pod:"calico-apiserver-6769499dcc-cptll", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib765097bd41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.375 [INFO][5257] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.375 
[INFO][5257] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" iface="eth0" netns="" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.375 [INFO][5257] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.375 [INFO][5257] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.415 [INFO][5264] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.415 [INFO][5264] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.416 [INFO][5264] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.427 [WARNING][5264] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.427 [INFO][5264] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" HandleID="k8s-pod-network.807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--cptll-eth0" Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.431 [INFO][5264] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:18.435917 containerd[1471]: 2026-04-13 20:25:18.433 [INFO][5257] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb" Apr 13 20:25:18.438106 containerd[1471]: time="2026-04-13T20:25:18.435979699Z" level=info msg="TearDown network for sandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" successfully" Apr 13 20:25:18.442143 containerd[1471]: time="2026-04-13T20:25:18.442081397Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:25:18.442311 containerd[1471]: time="2026-04-13T20:25:18.442182443Z" level=info msg="RemovePodSandbox \"807c4d9cac1d3095c7ac4e73fa3de46a3f2d7b8cae51a917fe2d40d6aa580fdb\" returns successfully" Apr 13 20:25:18.443575 containerd[1471]: time="2026-04-13T20:25:18.442984763Z" level=info msg="StopPodSandbox for \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\"" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.501 [WARNING][5278] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8", Pod:"calico-apiserver-6769499dcc-lkthr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae81db1f5f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.501 [INFO][5278] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.501 [INFO][5278] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" iface="eth0" netns="" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.501 [INFO][5278] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.501 [INFO][5278] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.534 [INFO][5285] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.535 [INFO][5285] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.535 [INFO][5285] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.548 [WARNING][5285] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.548 [INFO][5285] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.551 [INFO][5285] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:18.558044 containerd[1471]: 2026-04-13 20:25:18.555 [INFO][5278] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.558044 containerd[1471]: time="2026-04-13T20:25:18.557851369Z" level=info msg="TearDown network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" successfully" Apr 13 20:25:18.558044 containerd[1471]: time="2026-04-13T20:25:18.557878889Z" level=info msg="StopPodSandbox for \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" returns successfully" Apr 13 20:25:18.559113 containerd[1471]: time="2026-04-13T20:25:18.558564309Z" level=info msg="RemovePodSandbox for \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\"" Apr 13 20:25:18.559113 containerd[1471]: time="2026-04-13T20:25:18.558607881Z" level=info msg="Forcibly stopping sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\"" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.634 [WARNING][5299] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0", GenerateName:"calico-apiserver-6769499dcc-", Namespace:"calico-system", SelfLink:"", UID:"ad3ffed3-72b8-4b25-b898-ef75c4c8b3c1", ResourceVersion:"1018", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 24, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6769499dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-2540ca57ce6935ef8028.c.flatcar-212911.internal", ContainerID:"c3808bfe22c926b588145587795133195b4e71f555a8f49145d2134b780f9da8", Pod:"calico-apiserver-6769499dcc-lkthr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.108.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"caliae81db1f5f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.634 [INFO][5299] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.634 
[INFO][5299] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" iface="eth0" netns="" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.634 [INFO][5299] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.634 [INFO][5299] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.670 [INFO][5306] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.671 [INFO][5306] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.671 [INFO][5306] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.682 [WARNING][5306] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.682 [INFO][5306] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" HandleID="k8s-pod-network.6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Workload="ci--4081--3--7--2540ca57ce6935ef8028.c.flatcar--212911.internal-k8s-calico--apiserver--6769499dcc--lkthr-eth0" Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.686 [INFO][5306] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:25:18.690924 containerd[1471]: 2026-04-13 20:25:18.688 [INFO][5299] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011" Apr 13 20:25:18.692198 containerd[1471]: time="2026-04-13T20:25:18.690923471Z" level=info msg="TearDown network for sandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" successfully" Apr 13 20:25:18.700048 containerd[1471]: time="2026-04-13T20:25:18.699876467Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:25:18.700048 containerd[1471]: time="2026-04-13T20:25:18.699970408Z" level=info msg="RemovePodSandbox \"6f201e1bfac310e360ee0dd21db5a91cf05a0c3a2e6d5bd507902f73624db011\" returns successfully" Apr 13 20:25:19.506669 ntpd[1439]: Listen normally on 6 vxlan.calico 192.168.108.0:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 6 vxlan.calico 192.168.108.0:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 7 caliae81db1f5f4 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 8 cali7bfda0e842d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 9 calib437c229611 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 10 calib765097bd41 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 11 cali45acde61227 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 12 cali204a7c1cdef [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 13 calided136547d4 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 14 vxlan.calico [fe80::6481:ddff:fe2a:e17b%11]:123 Apr 13 20:25:19.508781 ntpd[1439]: 13 Apr 20:25:19 ntpd[1439]: Listen normally on 15 cali48f0c2eebbe [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:25:19.508126 ntpd[1439]: Listen normally on 7 caliae81db1f5f4 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:25:19.508227 ntpd[1439]: Listen normally on 8 cali7bfda0e842d [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:25:19.508293 ntpd[1439]: Listen normally on 9 calib437c229611 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:25:19.508341 ntpd[1439]: Listen normally on 10 
calib765097bd41 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:25:19.508429 ntpd[1439]: Listen normally on 11 cali45acde61227 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:25:19.508504 ntpd[1439]: Listen normally on 12 cali204a7c1cdef [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:25:19.508564 ntpd[1439]: Listen normally on 13 calided136547d4 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:25:19.508611 ntpd[1439]: Listen normally on 14 vxlan.calico [fe80::6481:ddff:fe2a:e17b%11]:123 Apr 13 20:25:19.508658 ntpd[1439]: Listen normally on 15 cali48f0c2eebbe [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:25:20.133614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4104973584.mount: Deactivated successfully. Apr 13 20:25:21.063440 containerd[1471]: time="2026-04-13T20:25:21.063349810Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:21.066417 containerd[1471]: time="2026-04-13T20:25:21.066020869Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:25:21.069450 containerd[1471]: time="2026-04-13T20:25:21.068654760Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:21.074393 containerd[1471]: time="2026-04-13T20:25:21.074291016Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:21.077843 containerd[1471]: time="2026-04-13T20:25:21.077705912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 8.26141603s" Apr 13 20:25:21.078223 containerd[1471]: time="2026-04-13T20:25:21.078156078Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:25:21.084968 containerd[1471]: time="2026-04-13T20:25:21.084409772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:25:21.092043 containerd[1471]: time="2026-04-13T20:25:21.091957697Z" level=info msg="CreateContainer within sandbox \"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:25:21.124611 containerd[1471]: time="2026-04-13T20:25:21.124417395Z" level=info msg="CreateContainer within sandbox \"2ca13d6276c28a79292c3e286ff58db7b92288bcf24e77dce78c0af48e83e6b4\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"4c05bc683c3f405452cb3198c3095826e4933d5674e04ef7b043e6af911cfc1b\"" Apr 13 20:25:21.127824 containerd[1471]: time="2026-04-13T20:25:21.126669285Z" level=info msg="StartContainer for \"4c05bc683c3f405452cb3198c3095826e4933d5674e04ef7b043e6af911cfc1b\"" Apr 13 20:25:21.219147 systemd[1]: Started cri-containerd-4c05bc683c3f405452cb3198c3095826e4933d5674e04ef7b043e6af911cfc1b.scope - libcontainer container 4c05bc683c3f405452cb3198c3095826e4933d5674e04ef7b043e6af911cfc1b. 
Apr 13 20:25:21.311540 containerd[1471]: time="2026-04-13T20:25:21.311423392Z" level=info msg="StartContainer for \"4c05bc683c3f405452cb3198c3095826e4933d5674e04ef7b043e6af911cfc1b\" returns successfully" Apr 13 20:25:21.331438 containerd[1471]: time="2026-04-13T20:25:21.330086528Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:21.333040 containerd[1471]: time="2026-04-13T20:25:21.332941059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:25:21.342016 containerd[1471]: time="2026-04-13T20:25:21.341838309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 257.271402ms" Apr 13 20:25:21.342467 containerd[1471]: time="2026-04-13T20:25:21.342052530Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:25:21.349772 containerd[1471]: time="2026-04-13T20:25:21.349694764Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:25:21.358548 containerd[1471]: time="2026-04-13T20:25:21.358203360Z" level=info msg="CreateContainer within sandbox \"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:25:21.392022 containerd[1471]: time="2026-04-13T20:25:21.391684848Z" level=info msg="CreateContainer within sandbox \"a84592d91e1ac3cd60d5105666604f06601b2ac8e1cdc1301457e41cebaf4d4e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"5871549ad88529fc43ea3c23b70b4456d7a8d1dcd3f6928c77ae66a1ad4b7e72\"" Apr 13 20:25:21.398929 containerd[1471]: time="2026-04-13T20:25:21.398093792Z" level=info msg="StartContainer for \"5871549ad88529fc43ea3c23b70b4456d7a8d1dcd3f6928c77ae66a1ad4b7e72\"" Apr 13 20:25:21.408558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3961691799.mount: Deactivated successfully. Apr 13 20:25:21.481088 systemd[1]: Started cri-containerd-5871549ad88529fc43ea3c23b70b4456d7a8d1dcd3f6928c77ae66a1ad4b7e72.scope - libcontainer container 5871549ad88529fc43ea3c23b70b4456d7a8d1dcd3f6928c77ae66a1ad4b7e72. Apr 13 20:25:21.573545 containerd[1471]: time="2026-04-13T20:25:21.573419150Z" level=info msg="StartContainer for \"5871549ad88529fc43ea3c23b70b4456d7a8d1dcd3f6928c77ae66a1ad4b7e72\" returns successfully" Apr 13 20:25:22.379269 kubelet[2629]: I0413 20:25:22.377800 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-6769499dcc-cptll" podStartSLOduration=32.497924875 podStartE2EDuration="45.377763456s" podCreationTimestamp="2026-04-13 20:24:37 +0000 UTC" firstStartedPulling="2026-04-13 20:25:08.468416308 +0000 UTC m=+52.361385845" lastFinishedPulling="2026-04-13 20:25:21.348254902 +0000 UTC m=+65.241224426" observedRunningTime="2026-04-13 20:25:22.306444403 +0000 UTC m=+66.199413945" watchObservedRunningTime="2026-04-13 20:25:22.377763456 +0000 UTC m=+66.270732990" Apr 13 20:25:23.275775 kubelet[2629]: I0413 20:25:23.275117 2629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:25:25.791539 containerd[1471]: time="2026-04-13T20:25:25.791395540Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:25.794345 containerd[1471]: time="2026-04-13T20:25:25.793801284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active 
requests=0, bytes read=52406348" Apr 13 20:25:25.796300 containerd[1471]: time="2026-04-13T20:25:25.796200244Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:25.803193 containerd[1471]: time="2026-04-13T20:25:25.803077695Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:25.805825 containerd[1471]: time="2026-04-13T20:25:25.804802232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 4.454960126s" Apr 13 20:25:25.805825 containerd[1471]: time="2026-04-13T20:25:25.804863249Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:25:25.807313 containerd[1471]: time="2026-04-13T20:25:25.807184637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:25:25.839004 containerd[1471]: time="2026-04-13T20:25:25.838238591Z" level=info msg="CreateContainer within sandbox \"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:25:25.898181 containerd[1471]: time="2026-04-13T20:25:25.897935542Z" level=info msg="CreateContainer within sandbox \"6065c2e9ef2c3d2a8f4c3a7218db1d688464ab2c854c0e590555e6a7d71ed12a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns 
container id \"74ebdd7341f912e8af2ff6a469afd3e920c95cb797adc7043fb0ea0fb316ebc5\"" Apr 13 20:25:25.901683 containerd[1471]: time="2026-04-13T20:25:25.899796329Z" level=info msg="StartContainer for \"74ebdd7341f912e8af2ff6a469afd3e920c95cb797adc7043fb0ea0fb316ebc5\"" Apr 13 20:25:25.978173 systemd[1]: Started cri-containerd-74ebdd7341f912e8af2ff6a469afd3e920c95cb797adc7043fb0ea0fb316ebc5.scope - libcontainer container 74ebdd7341f912e8af2ff6a469afd3e920c95cb797adc7043fb0ea0fb316ebc5. Apr 13 20:25:26.061557 containerd[1471]: time="2026-04-13T20:25:26.061317826Z" level=info msg="StartContainer for \"74ebdd7341f912e8af2ff6a469afd3e920c95cb797adc7043fb0ea0fb316ebc5\" returns successfully" Apr 13 20:25:26.325306 kubelet[2629]: I0413 20:25:26.324313 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6865ddd44-nxqpx" podStartSLOduration=30.447099565 podStartE2EDuration="47.324284736s" podCreationTimestamp="2026-04-13 20:24:39 +0000 UTC" firstStartedPulling="2026-04-13 20:25:08.929586268 +0000 UTC m=+52.822555797" lastFinishedPulling="2026-04-13 20:25:25.806771438 +0000 UTC m=+69.699740968" observedRunningTime="2026-04-13 20:25:26.32224297 +0000 UTC m=+70.215212511" watchObservedRunningTime="2026-04-13 20:25:26.324284736 +0000 UTC m=+70.217254298" Apr 13 20:25:26.329672 kubelet[2629]: I0413 20:25:26.328008 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-cccfbd5cf-w2rcq" podStartSLOduration=35.63354009 podStartE2EDuration="48.327983005s" podCreationTimestamp="2026-04-13 20:24:38 +0000 UTC" firstStartedPulling="2026-04-13 20:25:08.388140201 +0000 UTC m=+52.281109733" lastFinishedPulling="2026-04-13 20:25:21.08258313 +0000 UTC m=+64.975552648" observedRunningTime="2026-04-13 20:25:22.381390385 +0000 UTC m=+66.274359925" watchObservedRunningTime="2026-04-13 20:25:26.327983005 +0000 UTC m=+70.220952547" Apr 13 20:25:27.363864 containerd[1471]: 
time="2026-04-13T20:25:27.363719870Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:27.365802 containerd[1471]: time="2026-04-13T20:25:27.365617001Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:25:27.367824 containerd[1471]: time="2026-04-13T20:25:27.367675751Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:27.372716 containerd[1471]: time="2026-04-13T20:25:27.372655500Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:27.374390 containerd[1471]: time="2026-04-13T20:25:27.374172114Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.566925291s" Apr 13 20:25:27.374390 containerd[1471]: time="2026-04-13T20:25:27.374240035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:25:27.378459 containerd[1471]: time="2026-04-13T20:25:27.378101026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:25:27.385879 containerd[1471]: time="2026-04-13T20:25:27.385796845Z" level=info msg="CreateContainer within sandbox \"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 
20:25:27.428521 containerd[1471]: time="2026-04-13T20:25:27.428418493Z" level=info msg="CreateContainer within sandbox \"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"48f49da1a8ed1551c899f3ea006f576cff21d62ebb63e88bfd349862ce16e6d3\"" Apr 13 20:25:27.428936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3590466289.mount: Deactivated successfully. Apr 13 20:25:27.432477 containerd[1471]: time="2026-04-13T20:25:27.431820125Z" level=info msg="StartContainer for \"48f49da1a8ed1551c899f3ea006f576cff21d62ebb63e88bfd349862ce16e6d3\"" Apr 13 20:25:27.519590 systemd[1]: Started cri-containerd-48f49da1a8ed1551c899f3ea006f576cff21d62ebb63e88bfd349862ce16e6d3.scope - libcontainer container 48f49da1a8ed1551c899f3ea006f576cff21d62ebb63e88bfd349862ce16e6d3. Apr 13 20:25:27.592309 containerd[1471]: time="2026-04-13T20:25:27.592237781Z" level=info msg="StartContainer for \"48f49da1a8ed1551c899f3ea006f576cff21d62ebb63e88bfd349862ce16e6d3\" returns successfully" Apr 13 20:25:28.661974 containerd[1471]: time="2026-04-13T20:25:28.661880349Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:28.663863 containerd[1471]: time="2026-04-13T20:25:28.663778103Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:25:28.665717 containerd[1471]: time="2026-04-13T20:25:28.665197391Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:28.668811 containerd[1471]: time="2026-04-13T20:25:28.668705410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Apr 13 20:25:28.670051 containerd[1471]: time="2026-04-13T20:25:28.669996564Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.291842924s" Apr 13 20:25:28.670183 containerd[1471]: time="2026-04-13T20:25:28.670051037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:25:28.677628 containerd[1471]: time="2026-04-13T20:25:28.676775505Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:25:28.680560 containerd[1471]: time="2026-04-13T20:25:28.680497680Z" level=info msg="CreateContainer within sandbox \"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:25:28.704785 containerd[1471]: time="2026-04-13T20:25:28.703611145Z" level=info msg="CreateContainer within sandbox \"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"3c8822a2cdede8752659a083db41ca3f63eebb54eaf610ed061e038dd3435ffe\"" Apr 13 20:25:28.708338 containerd[1471]: time="2026-04-13T20:25:28.708242721Z" level=info msg="StartContainer for \"3c8822a2cdede8752659a083db41ca3f63eebb54eaf610ed061e038dd3435ffe\"" Apr 13 20:25:28.791318 systemd[1]: Started cri-containerd-3c8822a2cdede8752659a083db41ca3f63eebb54eaf610ed061e038dd3435ffe.scope - libcontainer container 3c8822a2cdede8752659a083db41ca3f63eebb54eaf610ed061e038dd3435ffe. 
Apr 13 20:25:28.872576 containerd[1471]: time="2026-04-13T20:25:28.872509140Z" level=info msg="StartContainer for \"3c8822a2cdede8752659a083db41ca3f63eebb54eaf610ed061e038dd3435ffe\" returns successfully" Apr 13 20:25:30.239370 containerd[1471]: time="2026-04-13T20:25:30.239265200Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:30.241502 containerd[1471]: time="2026-04-13T20:25:30.241399165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:25:30.244801 containerd[1471]: time="2026-04-13T20:25:30.242595145Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:30.248332 containerd[1471]: time="2026-04-13T20:25:30.248266362Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:30.250030 containerd[1471]: time="2026-04-13T20:25:30.249976681Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.573144794s" Apr 13 20:25:30.250231 containerd[1471]: time="2026-04-13T20:25:30.250200339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:25:30.252299 containerd[1471]: 
time="2026-04-13T20:25:30.252267345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:25:30.259472 containerd[1471]: time="2026-04-13T20:25:30.259400160Z" level=info msg="CreateContainer within sandbox \"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:25:30.286998 containerd[1471]: time="2026-04-13T20:25:30.286806690Z" level=info msg="CreateContainer within sandbox \"e2907d77bace133deb22d4331a90cb74437d1e731a80f055486b5c1812fb9ff5\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f\"" Apr 13 20:25:30.290052 containerd[1471]: time="2026-04-13T20:25:30.289986652Z" level=info msg="StartContainer for \"fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f\"" Apr 13 20:25:30.377154 systemd[1]: run-containerd-runc-k8s.io-fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f-runc.myOGGD.mount: Deactivated successfully. Apr 13 20:25:30.390968 systemd[1]: Started cri-containerd-fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f.scope - libcontainer container fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f. 
Apr 13 20:25:30.443873 containerd[1471]: time="2026-04-13T20:25:30.443774392Z" level=info msg="StartContainer for \"fc30e471ecdb43e3e050bd03a5b051355b1056503260dea7b4dc742d52c8202f\" returns successfully" Apr 13 20:25:30.520457 kubelet[2629]: I0413 20:25:30.519684 2629 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:25:30.520457 kubelet[2629]: I0413 20:25:30.519831 2629 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:25:31.998293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504421562.mount: Deactivated successfully. Apr 13 20:25:32.020686 containerd[1471]: time="2026-04-13T20:25:32.020607313Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:32.022379 containerd[1471]: time="2026-04-13T20:25:32.022311963Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:25:32.024564 containerd[1471]: time="2026-04-13T20:25:32.024512088Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:32.029807 containerd[1471]: time="2026-04-13T20:25:32.029312216Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:25:32.031792 containerd[1471]: time="2026-04-13T20:25:32.031705763Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.779226949s" Apr 13 20:25:32.032065 containerd[1471]: time="2026-04-13T20:25:32.032028124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:25:32.048189 containerd[1471]: time="2026-04-13T20:25:32.048066485Z" level=info msg="CreateContainer within sandbox \"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:25:32.074176 containerd[1471]: time="2026-04-13T20:25:32.072331053Z" level=info msg="CreateContainer within sandbox \"566fe0ba9e9ed8d12f8fd715b5114b2f4a87f690251dfa6585b238ed884edfc6\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"09627912327877b37d16205844bf7f5eea82eeb2806bbc920dd5abdec456bb4d\"" Apr 13 20:25:32.075238 containerd[1471]: time="2026-04-13T20:25:32.075188348Z" level=info msg="StartContainer for \"09627912327877b37d16205844bf7f5eea82eeb2806bbc920dd5abdec456bb4d\"" Apr 13 20:25:32.140185 systemd[1]: Started cri-containerd-09627912327877b37d16205844bf7f5eea82eeb2806bbc920dd5abdec456bb4d.scope - libcontainer container 09627912327877b37d16205844bf7f5eea82eeb2806bbc920dd5abdec456bb4d. 
Apr 13 20:25:32.215856 containerd[1471]: time="2026-04-13T20:25:32.215592360Z" level=info msg="StartContainer for \"09627912327877b37d16205844bf7f5eea82eeb2806bbc920dd5abdec456bb4d\" returns successfully" Apr 13 20:25:32.364760 kubelet[2629]: I0413 20:25:32.361555 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7648fcb95-gm92l" podStartSLOduration=2.803568636 podStartE2EDuration="25.361530294s" podCreationTimestamp="2026-04-13 20:25:07 +0000 UTC" firstStartedPulling="2026-04-13 20:25:09.476540632 +0000 UTC m=+53.369510167" lastFinishedPulling="2026-04-13 20:25:32.034502303 +0000 UTC m=+75.927471825" observedRunningTime="2026-04-13 20:25:32.360013544 +0000 UTC m=+76.252983085" watchObservedRunningTime="2026-04-13 20:25:32.361530294 +0000 UTC m=+76.254499836" Apr 13 20:25:32.364760 kubelet[2629]: I0413 20:25:32.361942 2629 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-bjsrs" podStartSLOduration=32.12018614 podStartE2EDuration="53.361929891s" podCreationTimestamp="2026-04-13 20:24:39 +0000 UTC" firstStartedPulling="2026-04-13 20:25:09.01035927 +0000 UTC m=+52.903328806" lastFinishedPulling="2026-04-13 20:25:30.252103042 +0000 UTC m=+74.145072557" observedRunningTime="2026-04-13 20:25:31.369441402 +0000 UTC m=+75.262410945" watchObservedRunningTime="2026-04-13 20:25:32.361929891 +0000 UTC m=+76.254899433" Apr 13 20:25:32.852806 systemd[1]: Started sshd@7-10.128.0.108:22-20.229.252.112:58580.service - OpenSSH per-connection server daemon (20.229.252.112:58580). Apr 13 20:25:33.587271 sshd[5746]: Accepted publickey for core from 20.229.252.112 port 58580 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:33.590414 sshd[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:33.599655 systemd-logind[1451]: New session 8 of user core. 
Apr 13 20:25:33.607210 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:25:34.229442 sshd[5746]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:34.236346 systemd[1]: sshd@7-10.128.0.108:22-20.229.252.112:58580.service: Deactivated successfully. Apr 13 20:25:34.242486 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:25:34.245871 systemd-logind[1451]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:25:34.248044 systemd-logind[1451]: Removed session 8. Apr 13 20:25:39.361405 systemd[1]: Started sshd@8-10.128.0.108:22-20.229.252.112:33332.service - OpenSSH per-connection server daemon (20.229.252.112:33332). Apr 13 20:25:40.102780 sshd[5805]: Accepted publickey for core from 20.229.252.112 port 33332 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:40.105186 sshd[5805]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:40.117157 systemd-logind[1451]: New session 9 of user core. Apr 13 20:25:40.124020 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:25:40.703301 sshd[5805]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:40.710323 systemd[1]: sshd@8-10.128.0.108:22-20.229.252.112:33332.service: Deactivated successfully. Apr 13 20:25:40.714838 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:25:40.716332 systemd-logind[1451]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:25:40.718223 systemd-logind[1451]: Removed session 9. Apr 13 20:25:45.831144 systemd[1]: Started sshd@9-10.128.0.108:22-20.229.252.112:59846.service - OpenSSH per-connection server daemon (20.229.252.112:59846). 
Apr 13 20:25:46.531536 sshd[5823]: Accepted publickey for core from 20.229.252.112 port 59846 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:46.536681 sshd[5823]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:46.546589 systemd-logind[1451]: New session 10 of user core. Apr 13 20:25:46.554156 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:25:47.124372 sshd[5823]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:47.133506 systemd-logind[1451]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:25:47.134772 systemd[1]: sshd@9-10.128.0.108:22-20.229.252.112:59846.service: Deactivated successfully. Apr 13 20:25:47.140986 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:25:47.142976 systemd-logind[1451]: Removed session 10. Apr 13 20:25:52.249217 systemd[1]: Started sshd@10-10.128.0.108:22-20.229.252.112:59862.service - OpenSSH per-connection server daemon (20.229.252.112:59862). Apr 13 20:25:52.947452 sshd[5840]: Accepted publickey for core from 20.229.252.112 port 59862 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:52.949889 sshd[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:52.957927 systemd-logind[1451]: New session 11 of user core. Apr 13 20:25:52.966088 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:25:53.522383 sshd[5840]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:53.528965 systemd[1]: sshd@10-10.128.0.108:22-20.229.252.112:59862.service: Deactivated successfully. Apr 13 20:25:53.532654 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:25:53.534053 systemd-logind[1451]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:25:53.535630 systemd-logind[1451]: Removed session 11. 
Apr 13 20:25:53.651254 systemd[1]: Started sshd@11-10.128.0.108:22-20.229.252.112:59872.service - OpenSSH per-connection server daemon (20.229.252.112:59872). Apr 13 20:25:54.355047 sshd[5868]: Accepted publickey for core from 20.229.252.112 port 59872 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:54.358937 sshd[5868]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:54.370812 systemd-logind[1451]: New session 12 of user core. Apr 13 20:25:54.375345 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:25:54.974231 sshd[5868]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:54.985248 systemd-logind[1451]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:25:54.988530 systemd[1]: sshd@11-10.128.0.108:22-20.229.252.112:59872.service: Deactivated successfully. Apr 13 20:25:54.995722 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:25:55.000606 systemd-logind[1451]: Removed session 12. Apr 13 20:25:55.108437 systemd[1]: Started sshd@12-10.128.0.108:22-20.229.252.112:53686.service - OpenSSH per-connection server daemon (20.229.252.112:53686). Apr 13 20:25:55.856129 sshd[5911]: Accepted publickey for core from 20.229.252.112 port 53686 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:25:55.858864 sshd[5911]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:25:55.867661 systemd-logind[1451]: New session 13 of user core. Apr 13 20:25:55.882355 systemd[1]: Started session-13.scope - Session 13 of User core. Apr 13 20:25:56.555039 sshd[5911]: pam_unix(sshd:session): session closed for user core Apr 13 20:25:56.562169 systemd[1]: sshd@12-10.128.0.108:22-20.229.252.112:53686.service: Deactivated successfully. Apr 13 20:25:56.567433 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:25:56.568777 systemd-logind[1451]: Session 13 logged out. 
Waiting for processes to exit. Apr 13 20:25:56.570559 systemd-logind[1451]: Removed session 13. Apr 13 20:25:58.228136 kubelet[2629]: I0413 20:25:58.226983 2629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:25:58.875497 kubelet[2629]: I0413 20:25:58.874584 2629 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:26:01.688413 systemd[1]: Started sshd@13-10.128.0.108:22-20.229.252.112:53692.service - OpenSSH per-connection server daemon (20.229.252.112:53692). Apr 13 20:26:02.418744 sshd[5962]: Accepted publickey for core from 20.229.252.112 port 53692 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:26:02.421498 sshd[5962]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:26:02.431355 systemd-logind[1451]: New session 14 of user core. Apr 13 20:26:02.433457 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:26:03.017253 sshd[5962]: pam_unix(sshd:session): session closed for user core Apr 13 20:26:03.027355 systemd[1]: sshd@13-10.128.0.108:22-20.229.252.112:53692.service: Deactivated successfully. Apr 13 20:26:03.028140 systemd-logind[1451]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:26:03.035617 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:26:03.043528 systemd-logind[1451]: Removed session 14. Apr 13 20:26:03.154954 systemd[1]: Started sshd@14-10.128.0.108:22-20.229.252.112:53706.service - OpenSSH per-connection server daemon (20.229.252.112:53706). Apr 13 20:26:03.870790 sshd[5975]: Accepted publickey for core from 20.229.252.112 port 53706 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:26:03.874458 sshd[5975]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:26:03.884301 systemd-logind[1451]: New session 15 of user core. Apr 13 20:26:03.895413 systemd[1]: Started session-15.scope - Session 15 of User core. 
Apr 13 20:26:04.567735 sshd[5975]: pam_unix(sshd:session): session closed for user core Apr 13 20:26:04.576442 systemd[1]: sshd@14-10.128.0.108:22-20.229.252.112:53706.service: Deactivated successfully. Apr 13 20:26:04.583049 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:26:04.588298 systemd-logind[1451]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:26:04.590954 systemd-logind[1451]: Removed session 15. Apr 13 20:26:04.701461 systemd[1]: Started sshd@15-10.128.0.108:22-20.229.252.112:53716.service - OpenSSH per-connection server daemon (20.229.252.112:53716). Apr 13 20:26:05.466856 sshd[5986]: Accepted publickey for core from 20.229.252.112 port 53716 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:26:05.468801 sshd[5986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:26:05.482122 systemd-logind[1451]: New session 16 of user core. Apr 13 20:26:05.487106 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:26:07.282095 sshd[5986]: pam_unix(sshd:session): session closed for user core Apr 13 20:26:07.296334 systemd-logind[1451]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:26:07.298516 systemd[1]: sshd@15-10.128.0.108:22-20.229.252.112:53716.service: Deactivated successfully. Apr 13 20:26:07.306344 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:26:07.314201 systemd-logind[1451]: Removed session 16. Apr 13 20:26:07.412352 systemd[1]: Started sshd@16-10.128.0.108:22-20.229.252.112:44008.service - OpenSSH per-connection server daemon (20.229.252.112:44008). Apr 13 20:26:08.169000 sshd[6029]: Accepted publickey for core from 20.229.252.112 port 44008 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:26:08.171980 sshd[6029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:26:08.184959 systemd-logind[1451]: New session 17 of user core. 
Apr 13 20:26:08.195343 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:26:09.088225 sshd[6029]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:09.101099 systemd-logind[1451]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:26:09.102151 systemd[1]: sshd@16-10.128.0.108:22-20.229.252.112:44008.service: Deactivated successfully.
Apr 13 20:26:09.114845 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:26:09.120381 systemd-logind[1451]: Removed session 17.
Apr 13 20:26:09.218900 systemd[1]: Started sshd@17-10.128.0.108:22-20.229.252.112:44020.service - OpenSSH per-connection server daemon (20.229.252.112:44020).
Apr 13 20:26:09.937326 sshd[6042]: Accepted publickey for core from 20.229.252.112 port 44020 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:09.940489 sshd[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:09.955700 systemd-logind[1451]: New session 18 of user core.
Apr 13 20:26:09.964122 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:26:10.582801 sshd[6042]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:10.591530 systemd[1]: sshd@17-10.128.0.108:22-20.229.252.112:44020.service: Deactivated successfully.
Apr 13 20:26:10.601514 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:26:10.606627 systemd-logind[1451]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:26:10.610315 systemd-logind[1451]: Removed session 18.
Apr 13 20:26:15.708377 systemd[1]: Started sshd@18-10.128.0.108:22-20.229.252.112:55066.service - OpenSSH per-connection server daemon (20.229.252.112:55066).
Apr 13 20:26:16.406982 sshd[6075]: Accepted publickey for core from 20.229.252.112 port 55066 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:16.410732 sshd[6075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:16.419552 systemd-logind[1451]: New session 19 of user core.
Apr 13 20:26:16.425154 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:26:17.004891 sshd[6075]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:17.010609 systemd[1]: sshd@18-10.128.0.108:22-20.229.252.112:55066.service: Deactivated successfully.
Apr 13 20:26:17.016180 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:26:17.019447 systemd-logind[1451]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:26:17.021512 systemd-logind[1451]: Removed session 19.
Apr 13 20:26:20.221910 update_engine[1458]: I20260413 20:26:20.221657 1458 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs
Apr 13 20:26:20.221910 update_engine[1458]: I20260413 20:26:20.221822 1458 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs
Apr 13 20:26:20.222613 update_engine[1458]: I20260413 20:26:20.222141 1458 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs
Apr 13 20:26:20.223134 update_engine[1458]: I20260413 20:26:20.223032 1458 omaha_request_params.cc:62] Current group set to lts
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223445 1458 update_attempter.cc:499] Already updated boot flags. Skipping.
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223479 1458 update_attempter.cc:643] Scheduling an action processor start.
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223510 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223558 1458 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223665 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223688 1458 omaha_request_action.cc:272] Request:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]:
Apr 13 20:26:20.224056 update_engine[1458]: I20260413 20:26:20.223704 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:26:20.224923 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0
Apr 13 20:26:20.226071 update_engine[1458]: I20260413 20:26:20.226015 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:26:20.226518 update_engine[1458]: I20260413 20:26:20.226454 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:26:20.677709 update_engine[1458]: E20260413 20:26:20.677410 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:26:20.677709 update_engine[1458]: I20260413 20:26:20.677644 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 1
Apr 13 20:26:22.138312 systemd[1]: Started sshd@19-10.128.0.108:22-20.229.252.112:55080.service - OpenSSH per-connection server daemon (20.229.252.112:55080).
Apr 13 20:26:22.874661 sshd[6090]: Accepted publickey for core from 20.229.252.112 port 55080 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:22.877306 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:22.886801 systemd-logind[1451]: New session 20 of user core.
Apr 13 20:26:22.894226 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:26:23.483057 sshd[6090]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:23.492600 systemd[1]: sshd@19-10.128.0.108:22-20.229.252.112:55080.service: Deactivated successfully.
Apr 13 20:26:23.497183 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:26:23.499438 systemd-logind[1451]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:26:23.502473 systemd-logind[1451]: Removed session 20.
Apr 13 20:26:28.612303 systemd[1]: Started sshd@20-10.128.0.108:22-20.229.252.112:53292.service - OpenSSH per-connection server daemon (20.229.252.112:53292).
Apr 13 20:26:29.336354 sshd[6145]: Accepted publickey for core from 20.229.252.112 port 53292 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:29.338735 sshd[6145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:29.348159 systemd-logind[1451]: New session 21 of user core.
Apr 13 20:26:29.354086 systemd[1]: Started session-21.scope - Session 21 of User core.
Apr 13 20:26:29.925465 sshd[6145]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:29.931415 systemd[1]: sshd@20-10.128.0.108:22-20.229.252.112:53292.service: Deactivated successfully.
Apr 13 20:26:29.935592 systemd[1]: session-21.scope: Deactivated successfully.
Apr 13 20:26:29.938319 systemd-logind[1451]: Session 21 logged out. Waiting for processes to exit.
Apr 13 20:26:29.940912 systemd-logind[1451]: Removed session 21.
Apr 13 20:26:31.222936 update_engine[1458]: I20260413 20:26:31.222734 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:26:31.223728 update_engine[1458]: I20260413 20:26:31.223299 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:26:31.223898 update_engine[1458]: I20260413 20:26:31.223737 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:26:31.234270 update_engine[1458]: E20260413 20:26:31.234162 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:26:31.234497 update_engine[1458]: I20260413 20:26:31.234319 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 2
Apr 13 20:26:35.067501 systemd[1]: Started sshd@21-10.128.0.108:22-20.229.252.112:43286.service - OpenSSH per-connection server daemon (20.229.252.112:43286).
Apr 13 20:26:35.817565 sshd[6190]: Accepted publickey for core from 20.229.252.112 port 43286 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:35.820731 sshd[6190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:35.829546 systemd-logind[1451]: New session 22 of user core.
Apr 13 20:26:35.835210 systemd[1]: Started session-22.scope - Session 22 of User core.
Apr 13 20:26:36.557120 sshd[6190]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:36.567272 systemd[1]: sshd@21-10.128.0.108:22-20.229.252.112:43286.service: Deactivated successfully.
Apr 13 20:26:36.567354 systemd-logind[1451]: Session 22 logged out. Waiting for processes to exit.
Apr 13 20:26:36.574229 systemd[1]: session-22.scope: Deactivated successfully.
Apr 13 20:26:36.579354 systemd-logind[1451]: Removed session 22.
Apr 13 20:26:41.222203 update_engine[1458]: I20260413 20:26:41.221923 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:26:41.222937 update_engine[1458]: I20260413 20:26:41.222568 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:26:41.223257 update_engine[1458]: I20260413 20:26:41.222980 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:26:41.236262 update_engine[1458]: E20260413 20:26:41.236146 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:26:41.236560 update_engine[1458]: I20260413 20:26:41.236321 1458 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Apr 13 20:26:41.690348 systemd[1]: Started sshd@22-10.128.0.108:22-20.229.252.112:43290.service - OpenSSH per-connection server daemon (20.229.252.112:43290).
Apr 13 20:26:42.425552 sshd[6231]: Accepted publickey for core from 20.229.252.112 port 43290 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:42.426654 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:42.436481 systemd-logind[1451]: New session 23 of user core.
Apr 13 20:26:42.441143 systemd[1]: Started session-23.scope - Session 23 of User core.
Apr 13 20:26:43.011488 sshd[6231]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:43.018462 systemd[1]: sshd@22-10.128.0.108:22-20.229.252.112:43290.service: Deactivated successfully.
Apr 13 20:26:43.023547 systemd[1]: session-23.scope: Deactivated successfully.
Apr 13 20:26:43.024804 systemd-logind[1451]: Session 23 logged out. Waiting for processes to exit.
Apr 13 20:26:43.027075 systemd-logind[1451]: Removed session 23.
Apr 13 20:26:48.142086 systemd[1]: Started sshd@23-10.128.0.108:22-20.229.252.112:35150.service - OpenSSH per-connection server daemon (20.229.252.112:35150).
Apr 13 20:26:48.848736 sshd[6277]: Accepted publickey for core from 20.229.252.112 port 35150 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:48.851316 sshd[6277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:48.858855 systemd-logind[1451]: New session 24 of user core.
Apr 13 20:26:48.868128 systemd[1]: Started session-24.scope - Session 24 of User core.
Apr 13 20:26:49.438155 sshd[6277]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:49.443899 systemd[1]: sshd@23-10.128.0.108:22-20.229.252.112:35150.service: Deactivated successfully.
Apr 13 20:26:49.448740 systemd[1]: session-24.scope: Deactivated successfully.
Apr 13 20:26:49.452091 systemd-logind[1451]: Session 24 logged out. Waiting for processes to exit.
Apr 13 20:26:49.455110 systemd-logind[1451]: Removed session 24.
Apr 13 20:26:51.223005 update_engine[1458]: I20260413 20:26:51.222728 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:26:51.224082 update_engine[1458]: I20260413 20:26:51.223582 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:26:51.224169 update_engine[1458]: I20260413 20:26:51.224074 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:26:51.684556 update_engine[1458]: E20260413 20:26:51.684037 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:26:51.684556 update_engine[1458]: I20260413 20:26:51.684401 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 20:26:51.684556 update_engine[1458]: I20260413 20:26:51.684453 1458 omaha_request_action.cc:617] Omaha request response:
Apr 13 20:26:51.685089 update_engine[1458]: E20260413 20:26:51.684872 1458 omaha_request_action.cc:636] Omaha request network transfer failed.
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.684939 1458 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.684965 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.684978 1458 update_attempter.cc:306] Processing Done.
Apr 13 20:26:51.685089 update_engine[1458]: E20260413 20:26:51.685011 1458 update_attempter.cc:619] Update failed.
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.685042 1458 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.685055 1458 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Apr 13 20:26:51.685089 update_engine[1458]: I20260413 20:26:51.685070 1458 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Apr 13 20:26:51.685695 update_engine[1458]: I20260413 20:26:51.685329 1458 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Apr 13 20:26:51.685695 update_engine[1458]: I20260413 20:26:51.685395 1458 omaha_request_action.cc:271] Posting an Omaha request to disabled
Apr 13 20:26:51.685695 update_engine[1458]: I20260413 20:26:51.685410 1458 omaha_request_action.cc:272] Request:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]:
Apr 13 20:26:51.685695 update_engine[1458]: I20260413 20:26:51.685427 1458 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Apr 13 20:26:51.687997 update_engine[1458]: I20260413 20:26:51.685926 1458 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Apr 13 20:26:51.687997 update_engine[1458]: I20260413 20:26:51.686311 1458 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Apr 13 20:26:51.688067 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Apr 13 20:26:51.696308 update_engine[1458]: E20260413 20:26:51.696194 1458 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696499 1458 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696553 1458 omaha_request_action.cc:617] Omaha request response:
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696578 1458 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696593 1458 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696605 1458 update_attempter.cc:306] Processing Done.
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696621 1458 update_attempter.cc:310] Error event sent.
Apr 13 20:26:51.696679 update_engine[1458]: I20260413 20:26:51.696645 1458 update_check_scheduler.cc:74] Next update check in 44m50s
Apr 13 20:26:51.697676 locksmithd[1499]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Apr 13 20:26:54.577345 systemd[1]: Started sshd@24-10.128.0.108:22-20.229.252.112:35162.service - OpenSSH per-connection server daemon (20.229.252.112:35162).
Apr 13 20:26:55.305577 sshd[6313]: Accepted publickey for core from 20.229.252.112 port 35162 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:26:55.307991 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:26:55.319144 systemd-logind[1451]: New session 25 of user core.
Apr 13 20:26:55.326138 systemd[1]: Started session-25.scope - Session 25 of User core.
Apr 13 20:26:55.894998 sshd[6313]: pam_unix(sshd:session): session closed for user core
Apr 13 20:26:55.903054 systemd[1]: sshd@24-10.128.0.108:22-20.229.252.112:35162.service: Deactivated successfully.
Apr 13 20:26:55.908388 systemd[1]: session-25.scope: Deactivated successfully.
Apr 13 20:26:55.910908 systemd-logind[1451]: Session 25 logged out. Waiting for processes to exit.
Apr 13 20:26:55.913607 systemd-logind[1451]: Removed session 25.
Apr 13 20:27:01.026405 systemd[1]: Started sshd@25-10.128.0.108:22-20.229.252.112:37474.service - OpenSSH per-connection server daemon (20.229.252.112:37474).
Apr 13 20:27:01.728863 sshd[6347]: Accepted publickey for core from 20.229.252.112 port 37474 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:01.731519 sshd[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:01.741967 systemd-logind[1451]: New session 26 of user core.
Apr 13 20:27:01.747127 systemd[1]: Started session-26.scope - Session 26 of User core.
Apr 13 20:27:02.319518 sshd[6347]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:02.327982 systemd[1]: sshd@25-10.128.0.108:22-20.229.252.112:37474.service: Deactivated successfully.
Apr 13 20:27:02.334061 systemd[1]: session-26.scope: Deactivated successfully.
Apr 13 20:27:02.335636 systemd-logind[1451]: Session 26 logged out. Waiting for processes to exit.
Apr 13 20:27:02.338291 systemd-logind[1451]: Removed session 26.
Apr 13 20:27:07.450615 systemd[1]: Started sshd@26-10.128.0.108:22-20.229.252.112:37806.service - OpenSSH per-connection server daemon (20.229.252.112:37806).
Apr 13 20:27:08.154819 sshd[6383]: Accepted publickey for core from 20.229.252.112 port 37806 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:08.157342 sshd[6383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:08.167030 systemd-logind[1451]: New session 27 of user core.
Apr 13 20:27:08.173336 systemd[1]: Started session-27.scope - Session 27 of User core.
Apr 13 20:27:08.742642 sshd[6383]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:08.749962 systemd[1]: sshd@26-10.128.0.108:22-20.229.252.112:37806.service: Deactivated successfully.
Apr 13 20:27:08.754616 systemd[1]: session-27.scope: Deactivated successfully.
Apr 13 20:27:08.757211 systemd-logind[1451]: Session 27 logged out. Waiting for processes to exit.
Apr 13 20:27:08.759684 systemd-logind[1451]: Removed session 27.
Apr 13 20:27:13.872278 systemd[1]: Started sshd@27-10.128.0.108:22-20.229.252.112:37810.service - OpenSSH per-connection server daemon (20.229.252.112:37810).
Apr 13 20:27:14.567906 sshd[6414]: Accepted publickey for core from 20.229.252.112 port 37810 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:14.570586 sshd[6414]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:14.582048 systemd-logind[1451]: New session 28 of user core.
Apr 13 20:27:14.589109 systemd[1]: Started session-28.scope - Session 28 of User core.
Apr 13 20:27:15.166620 sshd[6414]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:15.174596 systemd[1]: sshd@27-10.128.0.108:22-20.229.252.112:37810.service: Deactivated successfully.
Apr 13 20:27:15.180051 systemd[1]: session-28.scope: Deactivated successfully.
Apr 13 20:27:15.181976 systemd-logind[1451]: Session 28 logged out. Waiting for processes to exit.
Apr 13 20:27:15.184543 systemd-logind[1451]: Removed session 28.
Apr 13 20:27:20.299215 systemd[1]: Started sshd@28-10.128.0.108:22-20.229.252.112:50030.service - OpenSSH per-connection server daemon (20.229.252.112:50030).
Apr 13 20:27:21.021906 sshd[6429]: Accepted publickey for core from 20.229.252.112 port 50030 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:21.024678 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:21.033997 systemd-logind[1451]: New session 29 of user core.
Apr 13 20:27:21.041236 systemd[1]: Started session-29.scope - Session 29 of User core.
Apr 13 20:27:21.615713 sshd[6429]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:21.623610 systemd[1]: sshd@28-10.128.0.108:22-20.229.252.112:50030.service: Deactivated successfully.
Apr 13 20:27:21.629001 systemd[1]: session-29.scope: Deactivated successfully.
Apr 13 20:27:21.630691 systemd-logind[1451]: Session 29 logged out. Waiting for processes to exit.
Apr 13 20:27:21.632802 systemd-logind[1451]: Removed session 29.
Apr 13 20:27:26.752462 systemd[1]: Started sshd@29-10.128.0.108:22-20.229.252.112:45578.service - OpenSSH per-connection server daemon (20.229.252.112:45578).
Apr 13 20:27:27.481020 sshd[6485]: Accepted publickey for core from 20.229.252.112 port 45578 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:27.482788 sshd[6485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:27.490976 systemd-logind[1451]: New session 30 of user core.
Apr 13 20:27:27.498070 systemd[1]: Started session-30.scope - Session 30 of User core.
Apr 13 20:27:28.124038 sshd[6485]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:28.130360 systemd[1]: sshd@29-10.128.0.108:22-20.229.252.112:45578.service: Deactivated successfully.
Apr 13 20:27:28.134650 systemd[1]: session-30.scope: Deactivated successfully.
Apr 13 20:27:28.139243 systemd-logind[1451]: Session 30 logged out. Waiting for processes to exit.
Apr 13 20:27:28.141603 systemd-logind[1451]: Removed session 30.
Apr 13 20:27:33.256614 systemd[1]: Started sshd@30-10.128.0.108:22-20.229.252.112:45580.service - OpenSSH per-connection server daemon (20.229.252.112:45580).
Apr 13 20:27:33.976814 sshd[6517]: Accepted publickey for core from 20.229.252.112 port 45580 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:33.979144 sshd[6517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:33.986991 systemd-logind[1451]: New session 31 of user core.
Apr 13 20:27:33.991097 systemd[1]: Started session-31.scope - Session 31 of User core.
Apr 13 20:27:34.579501 sshd[6517]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:34.584929 systemd[1]: sshd@30-10.128.0.108:22-20.229.252.112:45580.service: Deactivated successfully.
Apr 13 20:27:34.589714 systemd[1]: session-31.scope: Deactivated successfully.
Apr 13 20:27:34.592412 systemd-logind[1451]: Session 31 logged out. Waiting for processes to exit.
Apr 13 20:27:34.595424 systemd-logind[1451]: Removed session 31.
Apr 13 20:27:39.703315 systemd[1]: Started sshd@31-10.128.0.108:22-20.229.252.112:54832.service - OpenSSH per-connection server daemon (20.229.252.112:54832).
Apr 13 20:27:40.397654 sshd[6551]: Accepted publickey for core from 20.229.252.112 port 54832 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:40.400110 sshd[6551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:40.408851 systemd-logind[1451]: New session 32 of user core.
Apr 13 20:27:40.417124 systemd[1]: Started session-32.scope - Session 32 of User core.
Apr 13 20:27:40.965717 sshd[6551]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:40.973183 systemd[1]: sshd@31-10.128.0.108:22-20.229.252.112:54832.service: Deactivated successfully.
Apr 13 20:27:40.976310 systemd[1]: session-32.scope: Deactivated successfully.
Apr 13 20:27:40.978150 systemd-logind[1451]: Session 32 logged out. Waiting for processes to exit.
Apr 13 20:27:40.980734 systemd-logind[1451]: Removed session 32.
Apr 13 20:27:46.096314 systemd[1]: Started sshd@32-10.128.0.108:22-20.229.252.112:46124.service - OpenSSH per-connection server daemon (20.229.252.112:46124).
Apr 13 20:27:46.793067 sshd[6564]: Accepted publickey for core from 20.229.252.112 port 46124 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:46.795360 sshd[6564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:46.802044 systemd-logind[1451]: New session 33 of user core.
Apr 13 20:27:46.810061 systemd[1]: Started session-33.scope - Session 33 of User core.
Apr 13 20:27:47.371443 sshd[6564]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:47.377662 systemd[1]: sshd@32-10.128.0.108:22-20.229.252.112:46124.service: Deactivated successfully.
Apr 13 20:27:47.382469 systemd[1]: session-33.scope: Deactivated successfully.
Apr 13 20:27:47.385291 systemd-logind[1451]: Session 33 logged out. Waiting for processes to exit.
Apr 13 20:27:47.387950 systemd-logind[1451]: Removed session 33.
Apr 13 20:27:52.503323 systemd[1]: Started sshd@33-10.128.0.108:22-20.229.252.112:46138.service - OpenSSH per-connection server daemon (20.229.252.112:46138).
Apr 13 20:27:53.228184 sshd[6576]: Accepted publickey for core from 20.229.252.112 port 46138 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:53.230527 sshd[6576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:53.239394 systemd-logind[1451]: New session 34 of user core.
Apr 13 20:27:53.244179 systemd[1]: Started session-34.scope - Session 34 of User core.
Apr 13 20:27:53.815702 sshd[6576]: pam_unix(sshd:session): session closed for user core
Apr 13 20:27:53.821890 systemd[1]: sshd@33-10.128.0.108:22-20.229.252.112:46138.service: Deactivated successfully.
Apr 13 20:27:53.826152 systemd[1]: session-34.scope: Deactivated successfully.
Apr 13 20:27:53.829022 systemd-logind[1451]: Session 34 logged out. Waiting for processes to exit.
Apr 13 20:27:53.831237 systemd-logind[1451]: Removed session 34.
Apr 13 20:27:58.949342 systemd[1]: Started sshd@34-10.128.0.108:22-20.229.252.112:47082.service - OpenSSH per-connection server daemon (20.229.252.112:47082).
Apr 13 20:27:59.677149 sshd[6652]: Accepted publickey for core from 20.229.252.112 port 47082 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:27:59.680275 sshd[6652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:27:59.689759 systemd-logind[1451]: New session 35 of user core.
Apr 13 20:27:59.699221 systemd[1]: Started session-35.scope - Session 35 of User core.
Apr 13 20:28:00.288374 sshd[6652]: pam_unix(sshd:session): session closed for user core
Apr 13 20:28:00.295388 systemd[1]: sshd@34-10.128.0.108:22-20.229.252.112:47082.service: Deactivated successfully.
Apr 13 20:28:00.298946 systemd[1]: session-35.scope: Deactivated successfully.
Apr 13 20:28:00.300491 systemd-logind[1451]: Session 35 logged out. Waiting for processes to exit.
Apr 13 20:28:00.302388 systemd-logind[1451]: Removed session 35.