Apr 13 20:40:01.078630 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:40:01.080321 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:40:01.080341 kernel: BIOS-provided physical RAM map:
Apr 13 20:40:01.080356 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 13 20:40:01.080371 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 13 20:40:01.080385 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 13 20:40:01.080403 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 13 20:40:01.080422 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 13 20:40:01.080437 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 13 20:40:01.080450 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Apr 13 20:40:01.080465 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Apr 13 20:40:01.080480 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Apr 13 20:40:01.080495 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 13 20:40:01.080510 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 13 20:40:01.080532 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 13 20:40:01.080549 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 13 20:40:01.080574 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 13 20:40:01.080589 kernel: NX (Execute Disable) protection: active
Apr 13 20:40:01.080605 kernel: APIC: Static calls initialized
Apr 13 20:40:01.080621 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:40:01.080639 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Apr 13 20:40:01.080674 kernel: SMBIOS 2.4 present.
Apr 13 20:40:01.080690 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Apr 13 20:40:01.080705 kernel: Hypervisor detected: KVM
Apr 13 20:40:01.080725 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:40:01.080741 kernel: kvm-clock: using sched offset of 12293226812 cycles
Apr 13 20:40:01.080756 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:40:01.080772 kernel: tsc: Detected 2299.998 MHz processor
Apr 13 20:40:01.080788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:40:01.080805 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:40:01.080821 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 13 20:40:01.080838 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 13 20:40:01.080854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:40:01.080874 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 13 20:40:01.080891 kernel: Using GB pages for direct mapping
Apr 13 20:40:01.080906 kernel: Secure boot disabled
Apr 13 20:40:01.080922 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:40:01.080937 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 13 20:40:01.080952 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 13 20:40:01.080968 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 13 20:40:01.080990 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 13 20:40:01.081011 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 13 20:40:01.081028 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Apr 13 20:40:01.081045 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 13 20:40:01.081062 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 13 20:40:01.081080 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 13 20:40:01.081097 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 13 20:40:01.081118 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 13 20:40:01.081136 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 13 20:40:01.081155 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 13 20:40:01.081173 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 13 20:40:01.081191 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 13 20:40:01.081210 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 13 20:40:01.081228 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 13 20:40:01.081247 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 13 20:40:01.081265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 13 20:40:01.081288 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 13 20:40:01.081306 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:40:01.081324 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:40:01.081341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 13 20:40:01.081358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 13 20:40:01.081377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 13 20:40:01.081396 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 13 20:40:01.081415 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 13 20:40:01.081434 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Apr 13 20:40:01.081457 kernel: Zone ranges:
Apr 13 20:40:01.081476 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:40:01.081495 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:40:01.081514 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:40:01.081532 kernel: Movable zone start for each node
Apr 13 20:40:01.081559 kernel: Early memory node ranges
Apr 13 20:40:01.081575 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 13 20:40:01.081591 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 13 20:40:01.081608 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 13 20:40:01.081624 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 13 20:40:01.081684 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:40:01.081705 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 13 20:40:01.081723 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:40:01.081741 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 13 20:40:01.081760 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 13 20:40:01.081779 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:40:01.081797 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 13 20:40:01.081816 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:40:01.081833 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:40:01.081855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:40:01.081874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:40:01.081893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:40:01.081911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:40:01.081929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:40:01.081948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:40:01.081966 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:40:01.081984 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 13 20:40:01.082002 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:40:01.082025 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:40:01.082043 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:40:01.082061 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:40:01.082080 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:40:01.082097 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:40:01.082115 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:40:01.082134 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:40:01.082154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:40:01.082177 kernel: random: crng init done
Apr 13 20:40:01.082196 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 13 20:40:01.082214 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:40:01.082232 kernel: Fallback order for Node 0: 0
Apr 13 20:40:01.082251 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Apr 13 20:40:01.082270 kernel: Policy zone: Normal
Apr 13 20:40:01.082288 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:40:01.082306 kernel: software IO TLB: area num 2.
Apr 13 20:40:01.082325 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347140K reserved, 0K cma-reserved)
Apr 13 20:40:01.082347 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:40:01.082365 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:40:01.082383 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:40:01.082402 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:40:01.082420 kernel: Dynamic Preempt: voluntary
Apr 13 20:40:01.082438 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:40:01.082458 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:40:01.082478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:40:01.082514 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:40:01.082534 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:40:01.082560 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:40:01.082580 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:40:01.082603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:40:01.082622 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:40:01.082641 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
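The BIOS-e820 map near the top of this log fully determines the memory total that the kernel reports in the `Memory: 7513184K/7860584K available` line. As an illustration (not part of the log), a minimal Python sketch that sums the `usable` ranges; the `E820_LOG` sample is copied verbatim from the entries above:

```python
import re

# "BIOS-e820" lines copied from the boot log above (one reserved line
# included to show it is ignored by the sum).
E820_LOG = """\
BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
"""

E820_RE = re.compile(r"BIOS-e820: \[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)")

def usable_bytes(log: str) -> int:
    """Sum the sizes of all e820 ranges typed 'usable' (ranges are inclusive)."""
    total = 0
    for m in E820_RE.finditer(log):
        start, end, kind = int(m.group(1), 16), int(m.group(2), 16), m.group(3)
        if kind == "usable":
            total += end - start + 1
    return total

print(usable_bytes(E820_LOG) // 1024, "KiB usable")  # → 7860584 KiB usable
```

The result matches the second figure in the kernel's `Memory: 7513184K/7860584K available` line; the first figure is what remains after the kernel's own reservations.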
Apr 13 20:40:01.082810 kernel: Console: colour dummy device 80x25
Apr 13 20:40:01.082835 kernel: printk: console [ttyS0] enabled
Apr 13 20:40:01.082854 kernel: ACPI: Core revision 20230628
Apr 13 20:40:01.082873 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:40:01.082892 kernel: x2apic enabled
Apr 13 20:40:01.082911 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:40:01.082931 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 13 20:40:01.082951 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:40:01.082971 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 13 20:40:01.082991 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 13 20:40:01.083010 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 13 20:40:01.083033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:40:01.083050 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 13 20:40:01.083069 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 13 20:40:01.083086 kernel: Spectre V2 : Mitigation: IBRS
Apr 13 20:40:01.083106 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:40:01.083142 kernel: RETBleed: Mitigation: IBRS
Apr 13 20:40:01.083160 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:40:01.083180 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 13 20:40:01.083205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:40:01.083223 kernel: MDS: Mitigation: Clear CPU buffers
Apr 13 20:40:01.083242 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:40:01.083261 kernel: active return thunk: its_return_thunk
Apr 13 20:40:01.083279 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:40:01.083298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:40:01.083316 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:40:01.083334 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:40:01.083353 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:40:01.083377 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 13 20:40:01.083397 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:40:01.083416 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:40:01.083434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:40:01.083453 kernel: landlock: Up and running.
Apr 13 20:40:01.083471 kernel: SELinux: Initializing.
Apr 13 20:40:01.083489 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.083508 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.083527 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 13 20:40:01.083565 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083586 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083606 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083626 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 13 20:40:01.083757 kernel: signal: max sigframe size: 1776
Apr 13 20:40:01.083784 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:40:01.083802 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:40:01.083820 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:40:01.083837 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:40:01.083862 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:40:01.083879 kernel: .... node #0, CPUs: #1
Apr 13 20:40:01.083898 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:40:01.083918 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:40:01.083936 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:40:01.083955 kernel: smpboot: Max logical packages: 1
Apr 13 20:40:01.083973 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 13 20:40:01.084002 kernel: devtmpfs: initialized
Apr 13 20:40:01.084026 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:40:01.084046 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 13 20:40:01.084065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:40:01.084084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:40:01.084103 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:40:01.084121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:40:01.084139 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:40:01.084157 kernel: audit: type=2000 audit(1776112799.574:1): state=initialized audit_enabled=0 res=1
Apr 13 20:40:01.084175 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:40:01.084224 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:40:01.084244 kernel: cpuidle: using governor menu
Apr 13 20:40:01.084263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:40:01.084283 kernel: dca service started, version 1.12.1
Apr 13 20:40:01.084302 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:40:01.084321 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:40:01.084342 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:40:01.084362 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:40:01.084381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:40:01.084423 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:40:01.084443 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:40:01.084463 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:40:01.084482 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:40:01.084502 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:40:01.084522 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:40:01.084542 kernel: ACPI: Interpreter enabled
Apr 13 20:40:01.084569 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:40:01.084588 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:40:01.084608 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:40:01.084631 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 13 20:40:01.084666 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 13 20:40:01.084686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:40:01.084958 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:40:01.085165 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:40:01.085354 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:40:01.085380 kernel: PCI host bridge to bus 0000:00
Apr 13 20:40:01.085578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:40:01.085793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:40:01.085966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:40:01.086137 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 13 20:40:01.086306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:40:01.086541 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:40:01.087031 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 13 20:40:01.087231 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 13 20:40:01.087415 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:40:01.087614 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 13 20:40:01.087829 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:40:01.088013 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 13 20:40:01.088205 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:40:01.088398 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:40:01.088589 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 13 20:40:01.088817 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 13 20:40:01.089000 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 13 20:40:01.089181 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 13 20:40:01.089204 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:40:01.089224 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:40:01.089249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:40:01.089266 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:40:01.089285 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:40:01.089303 kernel: iommu: Default domain type: Translated
Apr 13 20:40:01.089322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:40:01.089341 kernel: efivars: Registered efivars operations
Apr 13 20:40:01.089360 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:40:01.089379 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:40:01.089397 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 13 20:40:01.089419 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 13 20:40:01.089436 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 13 20:40:01.089454 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 13 20:40:01.089473 kernel: vgaarb: loaded
Apr 13 20:40:01.089491 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:40:01.089510 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:40:01.089527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:40:01.089546 kernel: pnp: PnP ACPI init
Apr 13 20:40:01.089573 kernel: pnp: PnP ACPI: found 7 devices
Apr 13 20:40:01.089592 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:40:01.089615 kernel: NET: Registered PF_INET protocol family
Apr 13 20:40:01.089634 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:40:01.089676 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 13 20:40:01.089695 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:40:01.089714 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:40:01.089732 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 13 20:40:01.089750 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 13 20:40:01.089769 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.089792 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.089810 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:40:01.089829 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:40:01.090003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:40:01.090169 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:40:01.090334 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:40:01.090498 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 13 20:40:01.090774 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:40:01.090805 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:40:01.090824 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:40:01.090841 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 13 20:40:01.091046 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:40:01.091065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:40:01.091083 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:40:01.091101 kernel: Initialise system trusted keyrings
Apr 13 20:40:01.091119 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 13 20:40:01.091143 kernel: Key type asymmetric registered
Apr 13 20:40:01.091160 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:40:01.091177 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:40:01.091195 kernel: io scheduler mq-deadline registered
Apr 13 20:40:01.091213 kernel: io scheduler kyber registered
Apr 13 20:40:01.091231 kernel: io scheduler bfq registered
Apr 13 20:40:01.091249 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:40:01.091268 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 13 20:40:01.092805 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 13 20:40:01.092844 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 13 20:40:01.093043 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 13 20:40:01.093068 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 13 20:40:01.093260 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 13 20:40:01.093285 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:40:01.093306 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:40:01.093326 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 13 20:40:01.093346 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 13 20:40:01.093367 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 13 20:40:01.093642 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 13 20:40:01.094968 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:40:01.094990 kernel: i8042: Warning: Keylock active
Apr 13 20:40:01.095010 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:40:01.095031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:40:01.095251 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 13 20:40:01.095447 kernel: rtc_cmos 00:00: registered as rtc0
Apr 13 20:40:01.095988 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:40:00 UTC (1776112800)
Apr 13 20:40:01.096207 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 13 20:40:01.096234 kernel: intel_pstate: CPU model not supported
Apr 13 20:40:01.096254 kernel: pstore: Using crash dump compression: deflate
Apr 13 20:40:01.096273 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 20:40:01.096292 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:40:01.096310 kernel: Segment Routing with IPv6
Apr 13 20:40:01.096329 kernel: In-situ OAM (IOAM) with IPv6
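The `rtc_cmos` entry above logs the wall-clock time in two forms at once: `2026-04-13T20:40:00 UTC` and its Unix-epoch equivalent `1776112800`. A quick check of that correspondence (illustrative, not part of the log):

```python
from datetime import datetime, timezone

# Epoch value taken from the rtc_cmos line in the log above.
epoch = 1776112800

# Converting the epoch back to a timezone-aware UTC datetime should
# recover exactly the RTC time the kernel printed.
rtc_time = datetime.fromtimestamp(epoch, tz=timezone.utc)
print(rtc_time.isoformat())  # → 2026-04-13T20:40:00+00:00
```

The slightly earlier epoch in the audit line (`audit(1776112799.574:1)`) is consistent with this: it was stamped about half a second before the RTC sync.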
Apr 13 20:40:01.096349 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:40:01.096374 kernel: Key type dns_resolver registered
Apr 13 20:40:01.096392 kernel: IPI shorthand broadcast: enabled
Apr 13 20:40:01.096412 kernel: sched_clock: Marking stable (847003964, 134298661)->(994368419, -13065794)
Apr 13 20:40:01.096431 kernel: registered taskstats version 1
Apr 13 20:40:01.096450 kernel: Loading compiled-in X.509 certificates
Apr 13 20:40:01.096469 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:40:01.096488 kernel: Key type .fscrypt registered
Apr 13 20:40:01.096507 kernel: Key type fscrypt-provisioning registered
Apr 13 20:40:01.096526 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:40:01.096559 kernel: ima: No architecture policies found
Apr 13 20:40:01.096579 kernel: clk: Disabling unused clocks
Apr 13 20:40:01.096598 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:40:01.096616 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:40:01.096635 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:40:01.096798 kernel: Run /init as init process
Apr 13 20:40:01.096819 kernel: with arguments:
Apr 13 20:40:01.096838 kernel: /init
Apr 13 20:40:01.096857 kernel: with environment:
Apr 13 20:40:01.096881 kernel: HOME=/
Apr 13 20:40:01.096900 kernel: TERM=linux
Apr 13 20:40:01.096920 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:40:01.096944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:40:01.096967 systemd[1]: Detected virtualization google.
Apr 13 20:40:01.096988 systemd[1]: Detected architecture x86-64.
Apr 13 20:40:01.097007 systemd[1]: Running in initrd.
Apr 13 20:40:01.097030 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:40:01.097050 systemd[1]: Hostname set to .
Apr 13 20:40:01.097070 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:40:01.097090 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:40:01.097110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:40:01.097130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:40:01.097152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:40:01.097172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:40:01.097196 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:40:01.097216 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:40:01.097239 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:40:01.097260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:40:01.097281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:40:01.097301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:40:01.097320 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:40:01.097345 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:40:01.097385 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:40:01.097409 systemd[1]: Reached target timers.target - Timer Units.
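The `Expecting device` unit names above are systemd's escaped form of the device paths shown after the dash: the leading `/` is dropped, each remaining `/` becomes `-`, and a literal `-` becomes `\x2d` (which is why `/dev/disk/by-label/EFI-SYSTEM` appears as `dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device`). A minimal sketch of that escaping, as an approximation of `systemd-escape --path` (not part of the log; edge cases such as a leading `.` are ignored here):

```python
def systemd_escape_path(path: str) -> str:
    """Approximate systemd's path-to-unit-name escaping: drop the leading
    '/', map '/' to '-', and escape bytes outside [A-Za-z0-9:_.]
    (including a literal '-') as \\xNN."""
    body = path.strip("/")
    out = []
    for ch in body:
        if ch == "/":
            out.append("-")
        elif ch.isascii() and (ch.isalnum() or ch in ":_."):
            out.append(ch)
        else:
            out.extend(f"\\x{b:02x}" for b in ch.encode())
    return "".join(out)

print(systemd_escape_path("/dev/disk/by-label/EFI-SYSTEM") + ".device")
# → dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device
```

Applied to the other paths in the log, this reproduces `dev-mapper-usr.device` and the escaped PARTUUID unit as well.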
Apr 13 20:40:01.097430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:40:01.097451 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:40:01.097472 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:40:01.097497 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:40:01.097518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:40:01.097539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:40:01.097569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:40:01.097590 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:40:01.097611 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:40:01.097632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:40:01.097674 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:40:01.097696 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:40:01.097722 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:40:01.097743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:40:01.097764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:40:01.097818 systemd-journald[184]: Collecting audit messages is disabled.
Apr 13 20:40:01.097867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:40:01.097888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:40:01.097909 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:40:01.097932 systemd-journald[184]: Journal started
Apr 13 20:40:01.097977 systemd-journald[184]: Runtime Journal (/run/log/journal/1e639bcad65948a2bb2037655c686cdd) is 8.0M, max 148.7M, 140.7M free.
Apr 13 20:40:01.104920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:40:01.102764 systemd-modules-load[185]: Inserted module 'overlay' Apr 13 20:40:01.113675 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:40:01.125819 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:40:01.126579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:40:01.134530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:40:01.140612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:01.156826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:01.164041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:40:01.169790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:40:01.169833 kernel: Bridge firewalling registered Apr 13 20:40:01.168464 systemd-modules-load[185]: Inserted module 'br_netfilter' Apr 13 20:40:01.170230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:40:01.187941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:40:01.201915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:40:01.207150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:40:01.216987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:40:01.221570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:40:01.234898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:40:01.269644 dracut-cmdline[218]: dracut-dracut-053 Apr 13 20:40:01.274892 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:40:01.278007 systemd-resolved[212]: Positive Trust Anchors: Apr 13 20:40:01.278155 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:40:01.278226 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:40:01.286000 systemd-resolved[212]: Defaulting to hostname 'linux'. Apr 13 20:40:01.289166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:40:01.315906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:40:01.374689 kernel: SCSI subsystem initialized Apr 13 20:40:01.385686 kernel: Loading iSCSI transport class v2.0-870. 
Apr 13 20:40:01.397686 kernel: iscsi: registered transport (tcp) Apr 13 20:40:01.422701 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:40:01.422781 kernel: QLogic iSCSI HBA Driver Apr 13 20:40:01.475330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:40:01.481889 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:40:01.523215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:40:01.523318 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:40:01.523347 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:40:01.568700 kernel: raid6: avx2x4 gen() 18065 MB/s Apr 13 20:40:01.585691 kernel: raid6: avx2x2 gen() 18139 MB/s Apr 13 20:40:01.603044 kernel: raid6: avx2x1 gen() 14058 MB/s Apr 13 20:40:01.603076 kernel: raid6: using algorithm avx2x2 gen() 18139 MB/s Apr 13 20:40:01.621083 kernel: raid6: .... xor() 17694 MB/s, rmw enabled Apr 13 20:40:01.621122 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:40:01.643685 kernel: xor: automatically using best checksumming function avx Apr 13 20:40:01.815690 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:40:01.829103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:40:01.835875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:40:01.861024 systemd-udevd[401]: Using default interface naming scheme 'v255'. Apr 13 20:40:01.868304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:40:01.875878 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:40:01.901953 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 13 20:40:01.939289 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 20:40:01.949021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:40:02.056541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:40:02.067928 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:40:02.104434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:40:02.110915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:40:02.120810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:40:02.127633 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:40:02.141895 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:40:02.176560 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:40:02.189669 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:40:02.256240 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:40:02.256384 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:40:02.256415 kernel: blk-mq: reduced tag depth to 10240 Apr 13 20:40:02.257677 kernel: AES CTR mode by8 optimization enabled Apr 13 20:40:02.276167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:40:02.288099 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 13 20:40:02.276374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:40:02.293076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:02.295767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:40:02.296031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:02.298716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Apr 13 20:40:02.312110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:40:02.346644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:02.356272 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 13 20:40:02.356612 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 13 20:40:02.357685 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 13 20:40:02.357977 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 13 20:40:02.359753 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:40:02.363862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:02.372205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:40:02.372266 kernel: GPT:17805311 != 33554431 Apr 13 20:40:02.372292 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:40:02.372318 kernel: GPT:17805311 != 33554431 Apr 13 20:40:02.372341 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:40:02.372365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.372388 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 13 20:40:02.406008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:40:02.425688 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457) Apr 13 20:40:02.427692 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (447) Apr 13 20:40:02.433977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 13 20:40:02.459525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Apr 13 20:40:02.466542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. 
Apr 13 20:40:02.466784 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 13 20:40:02.479139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:40:02.484997 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:40:02.500737 disk-uuid[551]: Primary Header is updated. Apr 13 20:40:02.500737 disk-uuid[551]: Secondary Entries is updated. Apr 13 20:40:02.500737 disk-uuid[551]: Secondary Header is updated. Apr 13 20:40:02.513344 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.525696 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.539699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:03.535942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:03.536488 disk-uuid[552]: The operation has completed successfully. Apr 13 20:40:03.616186 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:40:03.616367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:40:03.639871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:40:03.675919 sh[569]: Success Apr 13 20:40:03.697853 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 13 20:40:03.777174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:40:03.784217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:40:03.813180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:40:03.851486 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:40:03.851531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:03.851555 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:40:03.867609 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:40:03.867687 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:40:03.894694 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:40:03.899528 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:40:03.908626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:40:03.914870 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:40:03.985841 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:03.985883 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:03.985908 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:40:03.985933 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:40:03.985958 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:40:03.981918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:40:04.008837 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:04.026125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:40:04.042940 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 20:40:04.218009 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 13 20:40:04.240054 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:40:04.250496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:40:04.243584 ignition[638]: Ignition 2.19.0 Apr 13 20:40:04.243597 ignition[638]: Stage: fetch-offline Apr 13 20:40:04.243729 ignition[638]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.275261 systemd-networkd[756]: lo: Link UP Apr 13 20:40:04.243748 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.275266 systemd-networkd[756]: lo: Gained carrier Apr 13 20:40:04.243897 ignition[638]: parsed url from cmdline: "" Apr 13 20:40:04.277189 systemd-networkd[756]: Enumeration completed Apr 13 20:40:04.243904 ignition[638]: no config URL provided Apr 13 20:40:04.277786 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:40:04.243914 ignition[638]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:40:04.277793 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:40:04.243928 ignition[638]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:40:04.279966 systemd-networkd[756]: eth0: Link UP Apr 13 20:40:04.243940 ignition[638]: failed to fetch config: resource requires networking Apr 13 20:40:04.279973 systemd-networkd[756]: eth0: Gained carrier Apr 13 20:40:04.244259 ignition[638]: Ignition finished successfully Apr 13 20:40:04.279985 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:40:04.371667 ignition[760]: Ignition 2.19.0 Apr 13 20:40:04.294744 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.46/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:40:04.371682 ignition[760]: Stage: fetch Apr 13 20:40:04.300183 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:40:04.371900 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.307216 systemd[1]: Reached target network.target - Network. Apr 13 20:40:04.371915 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.330882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:40:04.372056 ignition[760]: parsed url from cmdline: "" Apr 13 20:40:04.379812 unknown[760]: fetched base config from "system" Apr 13 20:40:04.372065 ignition[760]: no config URL provided Apr 13 20:40:04.379825 unknown[760]: fetched base config from "system" Apr 13 20:40:04.372077 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:40:04.379836 unknown[760]: fetched user config from "gcp" Apr 13 20:40:04.372090 ignition[760]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:40:04.382428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:40:04.372115 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 13 20:40:04.406920 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:40:04.375253 ignition[760]: GET result: OK Apr 13 20:40:04.433052 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:40:04.375425 ignition[760]: parsing config with SHA512: 8623638941c7f93b1824329e648d322fefa1d687f47a0ece86d8fb97154c012689a1f4e9ae0178be94da6288ae443654d50c9da8b75087cbca16d56b051d241c Apr 13 20:40:04.465897 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 13 20:40:04.380357 ignition[760]: fetch: fetch complete Apr 13 20:40:04.490389 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:40:04.380370 ignition[760]: fetch: fetch passed Apr 13 20:40:04.510677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:40:04.380445 ignition[760]: Ignition finished successfully Apr 13 20:40:04.534823 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:40:04.430240 ignition[766]: Ignition 2.19.0 Apr 13 20:40:04.553812 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:40:04.430250 ignition[766]: Stage: kargs Apr 13 20:40:04.568800 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:40:04.430564 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.583782 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:40:04.430579 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.607864 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:40:04.431670 ignition[766]: kargs: kargs passed Apr 13 20:40:04.431746 ignition[766]: Ignition finished successfully Apr 13 20:40:04.484324 ignition[771]: Ignition 2.19.0 Apr 13 20:40:04.484336 ignition[771]: Stage: disks Apr 13 20:40:04.484546 ignition[771]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.484559 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.485793 ignition[771]: disks: disks passed Apr 13 20:40:04.485852 ignition[771]: Ignition finished successfully Apr 13 20:40:04.648483 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:40:04.842763 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:40:04.874856 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 13 20:40:04.991821 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:40:04.991566 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:40:05.006467 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:40:05.023793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:40:05.047813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:40:05.048666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:40:05.126825 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Apr 13 20:40:05.126872 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:05.126899 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:05.126932 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:40:05.126957 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:40:05.126983 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:40:05.048758 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:40:05.048805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:40:05.120046 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:40:05.142344 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:40:05.167862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 20:40:05.283363 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:40:05.293799 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:40:05.303786 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:40:05.313788 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:40:05.445957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:40:05.473834 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:40:05.501839 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:05.493013 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:40:05.510986 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:40:05.544679 ignition[901]: INFO : Ignition 2.19.0 Apr 13 20:40:05.544679 ignition[901]: INFO : Stage: mount Apr 13 20:40:05.544679 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:05.544679 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:05.593836 ignition[901]: INFO : mount: mount passed Apr 13 20:40:05.593836 ignition[901]: INFO : Ignition finished successfully Apr 13 20:40:05.549091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:40:05.559322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:40:05.581780 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:40:05.876850 systemd-networkd[756]: eth0: Gained IPv6LL Apr 13 20:40:05.998909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 20:40:06.034690 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913) Apr 13 20:40:06.052437 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:06.052527 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:06.052552 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:40:06.073942 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:40:06.074029 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:40:06.077086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:40:06.113921 ignition[930]: INFO : Ignition 2.19.0 Apr 13 20:40:06.113921 ignition[930]: INFO : Stage: files Apr 13 20:40:06.128841 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:06.128841 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:06.128841 ignition[930]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:40:06.128841 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:40:06.126181 unknown[930]: wrote ssh authorized keys file for user: core Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file 
"/sysroot/etc/flatcar-cgroupv1" Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Apr 13 20:40:06.293800 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Apr 13 20:40:06.450493 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:40:06.450493 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file 
"/sysroot/etc/flatcar/update.conf" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1 Apr 13 20:40:06.991891 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Apr 13 20:40:07.688358 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw" Apr 13 20:40:07.688358 ignition[930]: INFO : files: op(c): [started] processing unit "containerd.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): [finished] processing unit "containerd.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): [started] processing unit 
"prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:40:07.727835 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:40:07.727835 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:40:07.727835 ignition[930]: INFO : files: files passed Apr 13 20:40:07.727835 ignition[930]: INFO : Ignition finished successfully Apr 13 20:40:07.694513 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:40:07.723887 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:40:07.753880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:40:07.785274 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:40:08.018794 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:40:08.018794 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:40:07.785434 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Apr 13 20:40:08.074853 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:40:07.813128 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:40:07.820119 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:40:07.853006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:40:07.971965 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:40:07.972084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:40:07.990561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:40:08.010924 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:40:08.029092 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 13 20:40:08.035979 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:40:08.096494 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:40:08.117875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:40:08.154417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:40:08.166995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:40:08.189030 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:40:08.206980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:40:08.207150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:40:08.241094 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:40:08.262000 systemd[1]: Stopped target basic.target - Basic System. 
Apr 13 20:40:08.280998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:40:08.302012 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:40:08.323027 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:40:08.344064 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:40:08.363065 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:40:08.382009 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:40:08.403019 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:40:08.423076 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:40:08.439009 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:40:08.439214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:40:08.469055 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:40:08.488960 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:40:08.509958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:40:08.637838 ignition[982]: INFO : Ignition 2.19.0
Apr 13 20:40:08.637838 ignition[982]: INFO : Stage: umount
Apr 13 20:40:08.637838 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:40:08.637838 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:40:08.637838 ignition[982]: INFO : umount: umount passed
Apr 13 20:40:08.637838 ignition[982]: INFO : Ignition finished successfully
Apr 13 20:40:08.510141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:40:08.531975 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:40:08.532136 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:40:08.562068 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:40:08.562309 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:40:08.580048 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:40:08.580187 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:40:08.604950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:40:08.651956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:40:08.660789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:40:08.661112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:40:08.709161 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:40:08.709364 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:40:08.743933 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:40:08.745228 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:40:08.745388 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:40:08.760522 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:40:08.760638 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:40:08.782165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:40:08.782290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:40:08.791956 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:40:08.792020 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:40:08.821051 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:40:08.821127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:40:08.829051 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:40:08.829118 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:40:08.846079 systemd[1]: Stopped target network.target - Network.
Apr 13 20:40:08.863018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:40:08.863108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:40:08.895989 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:40:08.906030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:40:08.909749 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:40:08.921030 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:40:08.954912 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:40:08.973001 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:40:08.973080 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:40:08.981033 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:40:08.981103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:40:09.012879 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:40:09.012982 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:40:09.030852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:40:09.030943 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:40:09.051873 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:40:09.051960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:40:09.070105 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:40:09.076732 systemd-networkd[756]: eth0: DHCPv6 lease lost
Apr 13 20:40:09.086118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:40:09.120376 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:40:09.120514 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:40:09.129616 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:40:09.130634 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:40:09.164721 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:40:09.164776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:40:09.195813 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:40:09.207781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:40:09.207894 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:40:09.218932 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:40:09.219024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:40:09.241913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:40:09.242023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:40:09.259877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:40:09.688780 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:40:09.259985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:40:09.279044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:40:09.298316 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:40:09.298490 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:40:09.313139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:40:09.313205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:40:09.334170 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:40:09.334229 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:40:09.343843 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:40:09.343953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:40:09.364126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:40:09.364229 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:40:09.389988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:40:09.390212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:40:09.453952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:40:09.471782 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:40:09.471914 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:40:09.482901 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:40:09.482991 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:40:09.493835 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:40:09.493926 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:40:09.512896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:40:09.512987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:40:09.534353 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:40:09.534484 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:40:09.552218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:40:09.552340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:40:09.574143 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:40:09.586916 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:40:09.636636 systemd[1]: Switching root.
Apr 13 20:40:09.985799 systemd-journald[184]: Journal stopped
Apr 13 20:40:01.078630 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:40:01.080321 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:40:01.080341 kernel: BIOS-provided physical RAM map:
Apr 13 20:40:01.080356 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 13 20:40:01.080371 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 13 20:40:01.080385 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 13 20:40:01.080403 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 13 20:40:01.080422 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 13 20:40:01.080437 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 13 20:40:01.080450 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Apr 13 20:40:01.080465 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Apr 13 20:40:01.080480 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Apr 13 20:40:01.080495 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 13 20:40:01.080510 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 13 20:40:01.080532 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 13 20:40:01.080549 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 13 20:40:01.080574 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 13 20:40:01.080589 kernel: NX (Execute Disable) protection: active
Apr 13 20:40:01.080605 kernel: APIC: Static calls initialized
Apr 13 20:40:01.080621 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:40:01.080639 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Apr 13 20:40:01.080674 kernel: SMBIOS 2.4 present.
Apr 13 20:40:01.080690 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Apr 13 20:40:01.080705 kernel: Hypervisor detected: KVM
Apr 13 20:40:01.080725 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:40:01.080741 kernel: kvm-clock: using sched offset of 12293226812 cycles
Apr 13 20:40:01.080756 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:40:01.080772 kernel: tsc: Detected 2299.998 MHz processor
Apr 13 20:40:01.080788 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:40:01.080805 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:40:01.080821 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 13 20:40:01.080838 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 13 20:40:01.080854 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:40:01.080874 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 13 20:40:01.080891 kernel: Using GB pages for direct mapping
Apr 13 20:40:01.080906 kernel: Secure boot disabled
Apr 13 20:40:01.080922 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:40:01.080937 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 13 20:40:01.080952 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 13 20:40:01.080968 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 13 20:40:01.080990 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 13 20:40:01.081011 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 13 20:40:01.081028 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Apr 13 20:40:01.081045 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 13 20:40:01.081062 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 13 20:40:01.081080 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 13 20:40:01.081097 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 13 20:40:01.081118 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 13 20:40:01.081136 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 13 20:40:01.081155 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 13 20:40:01.081173 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 13 20:40:01.081191 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 13 20:40:01.081210 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 13 20:40:01.081228 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 13 20:40:01.081247 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 13 20:40:01.081265 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 13 20:40:01.081288 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 13 20:40:01.081306 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:40:01.081324 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:40:01.081341 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 13 20:40:01.081358 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 13 20:40:01.081377 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 13 20:40:01.081396 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 13 20:40:01.081415 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 13 20:40:01.081434 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Apr 13 20:40:01.081457 kernel: Zone ranges:
Apr 13 20:40:01.081476 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:40:01.081495 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:40:01.081514 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:40:01.081532 kernel: Movable zone start for each node
Apr 13 20:40:01.081559 kernel: Early memory node ranges
Apr 13 20:40:01.081575 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 13 20:40:01.081591 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 13 20:40:01.081608 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 13 20:40:01.081624 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 13 20:40:01.081684 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:40:01.081705 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 13 20:40:01.081723 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:40:01.081741 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 13 20:40:01.081760 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 13 20:40:01.081779 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:40:01.081797 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 13 20:40:01.081816 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:40:01.081833 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:40:01.081855 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:40:01.081874 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:40:01.081893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:40:01.081911 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:40:01.081929 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:40:01.081948 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:40:01.081966 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:40:01.081984 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 13 20:40:01.082002 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:40:01.082025 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:40:01.082043 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:40:01.082061 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:40:01.082080 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:40:01.082097 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:40:01.082115 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:40:01.082134 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:40:01.082154 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:40:01.082177 kernel: random: crng init done
Apr 13 20:40:01.082196 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 13 20:40:01.082214 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:40:01.082232 kernel: Fallback order for Node 0: 0
Apr 13 20:40:01.082251 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Apr 13 20:40:01.082270 kernel: Policy zone: Normal
Apr 13 20:40:01.082288 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:40:01.082306 kernel: software IO TLB: area num 2.
Apr 13 20:40:01.082325 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347140K reserved, 0K cma-reserved)
Apr 13 20:40:01.082347 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:40:01.082365 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:40:01.082383 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:40:01.082402 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:40:01.082420 kernel: Dynamic Preempt: voluntary
Apr 13 20:40:01.082438 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:40:01.082458 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:40:01.082478 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:40:01.082514 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:40:01.082534 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:40:01.082560 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:40:01.082580 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:40:01.082603 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:40:01.082622 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:40:01.082641 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:40:01.082810 kernel: Console: colour dummy device 80x25
Apr 13 20:40:01.082835 kernel: printk: console [ttyS0] enabled
Apr 13 20:40:01.082854 kernel: ACPI: Core revision 20230628
Apr 13 20:40:01.082873 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:40:01.082892 kernel: x2apic enabled
Apr 13 20:40:01.082911 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:40:01.082931 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 13 20:40:01.082951 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:40:01.082971 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 13 20:40:01.082991 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 13 20:40:01.083010 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 13 20:40:01.083033 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:40:01.083050 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 13 20:40:01.083069 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 13 20:40:01.083086 kernel: Spectre V2 : Mitigation: IBRS
Apr 13 20:40:01.083106 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:40:01.083142 kernel: RETBleed: Mitigation: IBRS
Apr 13 20:40:01.083160 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:40:01.083180 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 13 20:40:01.083205 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:40:01.083223 kernel: MDS: Mitigation: Clear CPU buffers
Apr 13 20:40:01.083242 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:40:01.083261 kernel: active return thunk: its_return_thunk
Apr 13 20:40:01.083279 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:40:01.083298 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:40:01.083316 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:40:01.083334 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:40:01.083353 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:40:01.083377 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 13 20:40:01.083397 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:40:01.083416 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:40:01.083434 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:40:01.083453 kernel: landlock: Up and running.
Apr 13 20:40:01.083471 kernel: SELinux: Initializing.
Apr 13 20:40:01.083489 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.083508 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.083527 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 13 20:40:01.083565 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083586 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083606 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:40:01.083626 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 13 20:40:01.083757 kernel: signal: max sigframe size: 1776
Apr 13 20:40:01.083784 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:40:01.083802 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:40:01.083820 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:40:01.083837 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:40:01.083862 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:40:01.083879 kernel: .... node #0, CPUs: #1
Apr 13 20:40:01.083898 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:40:01.083918 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:40:01.083936 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:40:01.083955 kernel: smpboot: Max logical packages: 1
Apr 13 20:40:01.083973 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 13 20:40:01.084002 kernel: devtmpfs: initialized
Apr 13 20:40:01.084026 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:40:01.084046 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 13 20:40:01.084065 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:40:01.084084 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:40:01.084103 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:40:01.084121 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:40:01.084139 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:40:01.084157 kernel: audit: type=2000 audit(1776112799.574:1): state=initialized audit_enabled=0 res=1
Apr 13 20:40:01.084175 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:40:01.084224 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:40:01.084244 kernel: cpuidle: using governor menu
Apr 13 20:40:01.084263 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:40:01.084283 kernel: dca service started, version 1.12.1
Apr 13 20:40:01.084302 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:40:01.084321 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:40:01.084342 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:40:01.084362 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:40:01.084381 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:40:01.084423 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:40:01.084443 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:40:01.084463 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:40:01.084482 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:40:01.084502 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:40:01.084522 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:40:01.084542 kernel: ACPI: Interpreter enabled
Apr 13 20:40:01.084569 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:40:01.084588 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:40:01.084608 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:40:01.084631 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 13 20:40:01.084666 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 13 20:40:01.084686 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:40:01.084958 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:40:01.085165 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:40:01.085354 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:40:01.085380 kernel: PCI host bridge to bus 0000:00
Apr 13 20:40:01.085578 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:40:01.085793 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:40:01.085966 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:40:01.086137 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 13 20:40:01.086306 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:40:01.086541 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:40:01.087031 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 13 20:40:01.087231 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 13 20:40:01.087415 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:40:01.087614 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 13 20:40:01.087829 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:40:01.088013 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 13 20:40:01.088205 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:40:01.088398 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:40:01.088589 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 13 20:40:01.088817 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 13 20:40:01.089000 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 13 20:40:01.089181 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 13 20:40:01.089204 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:40:01.089224 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:40:01.089249 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:40:01.089266 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:40:01.089285 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:40:01.089303 kernel: iommu: Default domain type: Translated
Apr 13 20:40:01.089322 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:40:01.089341 kernel: efivars: Registered efivars operations
Apr 13 20:40:01.089360 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:40:01.089379 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:40:01.089397 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 13 20:40:01.089419 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 13 20:40:01.089436 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 13 20:40:01.089454 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 13 20:40:01.089473 kernel: vgaarb: loaded
Apr 13 20:40:01.089491 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:40:01.089510 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:40:01.089527 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:40:01.089546 kernel: pnp: PnP ACPI init
Apr 13 20:40:01.089573 kernel: pnp: PnP ACPI: found 7 devices
Apr 13 20:40:01.089592 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:40:01.089615 kernel: NET: Registered PF_INET protocol family
Apr 13 20:40:01.089634 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:40:01.089676 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 13 20:40:01.089695 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:40:01.089714 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:40:01.089732 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 13 20:40:01.089750 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 13 20:40:01.089769 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.089792 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:40:01.089810 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:40:01.089829 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:40:01.090003 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:40:01.090169 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:40:01.090334 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:40:01.090498 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 13 20:40:01.090774 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:40:01.090805 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:40:01.090824 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:40:01.090841 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 13 20:40:01.091046 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:40:01.091065 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:40:01.091083 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:40:01.091101 kernel: Initialise system trusted keyrings
Apr 13 20:40:01.091119 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 13 20:40:01.091143 kernel: Key type asymmetric registered
Apr 13 20:40:01.091160 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:40:01.091177 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:40:01.091195 kernel: io scheduler mq-deadline registered
Apr 13 20:40:01.091213 kernel: io scheduler kyber registered
Apr 13 20:40:01.091231 kernel: io scheduler bfq registered
Apr 13
20:40:01.091249 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:40:01.091268 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 13 20:40:01.092805 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Apr 13 20:40:01.092844 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 13 20:40:01.093043 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 13 20:40:01.093068 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 13 20:40:01.093260 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 13 20:40:01.093285 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:40:01.093306 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:40:01.093326 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 13 20:40:01.093346 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 13 20:40:01.093367 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 13 20:40:01.093642 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 13 20:40:01.094968 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:40:01.094990 kernel: i8042: Warning: Keylock active Apr 13 20:40:01.095010 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:40:01.095031 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:40:01.095251 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 13 20:40:01.095447 kernel: rtc_cmos 00:00: registered as rtc0 Apr 13 20:40:01.095988 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:40:00 UTC (1776112800) Apr 13 20:40:01.096207 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 13 20:40:01.096234 kernel: intel_pstate: CPU model not supported Apr 13 20:40:01.096254 kernel: pstore: Using crash dump compression: deflate Apr 13 20:40:01.096273 kernel: pstore: Registered efi_pstore as persistent store 
backend Apr 13 20:40:01.096292 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:40:01.096310 kernel: Segment Routing with IPv6 Apr 13 20:40:01.096329 kernel: In-situ OAM (IOAM) with IPv6 Apr 13 20:40:01.096349 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:40:01.096374 kernel: Key type dns_resolver registered Apr 13 20:40:01.096392 kernel: IPI shorthand broadcast: enabled Apr 13 20:40:01.096412 kernel: sched_clock: Marking stable (847003964, 134298661)->(994368419, -13065794) Apr 13 20:40:01.096431 kernel: registered taskstats version 1 Apr 13 20:40:01.096450 kernel: Loading compiled-in X.509 certificates Apr 13 20:40:01.096469 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:40:01.096488 kernel: Key type .fscrypt registered Apr 13 20:40:01.096507 kernel: Key type fscrypt-provisioning registered Apr 13 20:40:01.096526 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:40:01.096559 kernel: ima: No architecture policies found Apr 13 20:40:01.096579 kernel: clk: Disabling unused clocks Apr 13 20:40:01.096598 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:40:01.096616 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:40:01.096635 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:40:01.096798 kernel: Run /init as init process Apr 13 20:40:01.096819 kernel: with arguments: Apr 13 20:40:01.096838 kernel: /init Apr 13 20:40:01.096857 kernel: with environment: Apr 13 20:40:01.096881 kernel: HOME=/ Apr 13 20:40:01.096900 kernel: TERM=linux Apr 13 20:40:01.096920 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:40:01.096944 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY 
-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:40:01.096967 systemd[1]: Detected virtualization google. Apr 13 20:40:01.096988 systemd[1]: Detected architecture x86-64. Apr 13 20:40:01.097007 systemd[1]: Running in initrd. Apr 13 20:40:01.097030 systemd[1]: No hostname configured, using default hostname. Apr 13 20:40:01.097050 systemd[1]: Hostname set to . Apr 13 20:40:01.097070 systemd[1]: Initializing machine ID from random generator. Apr 13 20:40:01.097090 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:40:01.097110 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:40:01.097130 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:40:01.097152 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:40:01.097172 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:40:01.097196 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:40:01.097216 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:40:01.097239 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:40:01.097260 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:40:01.097281 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:40:01.097301 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:40:01.097320 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:40:01.097345 systemd[1]: Reached target slices.target - Slice Units. 
Apr 13 20:40:01.097385 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:40:01.097409 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:40:01.097430 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:40:01.097451 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:40:01.097472 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:40:01.097497 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:40:01.097518 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:40:01.097539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:40:01.097569 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:40:01.097590 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:40:01.097611 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:40:01.097632 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:40:01.097674 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:40:01.097696 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:40:01.097722 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:40:01.097743 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:40:01.097764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:40:01.097818 systemd-journald[184]: Collecting audit messages is disabled. Apr 13 20:40:01.097867 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:40:01.097888 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:40:01.097909 systemd[1]: Finished systemd-fsck-usr.service. 
Apr 13 20:40:01.097932 systemd-journald[184]: Journal started Apr 13 20:40:01.097977 systemd-journald[184]: Runtime Journal (/run/log/journal/1e639bcad65948a2bb2037655c686cdd) is 8.0M, max 148.7M, 140.7M free. Apr 13 20:40:01.104920 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 13 20:40:01.102764 systemd-modules-load[185]: Inserted module 'overlay' Apr 13 20:40:01.113675 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:40:01.125819 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:40:01.126579 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:40:01.134530 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:40:01.140612 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:01.156826 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:01.164041 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:40:01.169790 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:40:01.169833 kernel: Bridge firewalling registered Apr 13 20:40:01.168464 systemd-modules-load[185]: Inserted module 'br_netfilter' Apr 13 20:40:01.170230 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:40:01.187941 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:40:01.201915 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:40:01.207150 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Apr 13 20:40:01.216987 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:40:01.221570 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:40:01.234898 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 13 20:40:01.269644 dracut-cmdline[218]: dracut-dracut-053 Apr 13 20:40:01.274892 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:40:01.278007 systemd-resolved[212]: Positive Trust Anchors: Apr 13 20:40:01.278155 systemd-resolved[212]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:40:01.278226 systemd-resolved[212]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:40:01.286000 systemd-resolved[212]: Defaulting to hostname 'linux'. Apr 13 20:40:01.289166 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:40:01.315906 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 13 20:40:01.374689 kernel: SCSI subsystem initialized Apr 13 20:40:01.385686 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:40:01.397686 kernel: iscsi: registered transport (tcp) Apr 13 20:40:01.422701 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:40:01.422781 kernel: QLogic iSCSI HBA Driver Apr 13 20:40:01.475330 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:40:01.481889 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:40:01.523215 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:40:01.523318 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:40:01.523347 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:40:01.568700 kernel: raid6: avx2x4 gen() 18065 MB/s Apr 13 20:40:01.585691 kernel: raid6: avx2x2 gen() 18139 MB/s Apr 13 20:40:01.603044 kernel: raid6: avx2x1 gen() 14058 MB/s Apr 13 20:40:01.603076 kernel: raid6: using algorithm avx2x2 gen() 18139 MB/s Apr 13 20:40:01.621083 kernel: raid6: .... xor() 17694 MB/s, rmw enabled Apr 13 20:40:01.621122 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:40:01.643685 kernel: xor: automatically using best checksumming function avx Apr 13 20:40:01.815690 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:40:01.829103 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:40:01.835875 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:40:01.861024 systemd-udevd[401]: Using default interface naming scheme 'v255'. Apr 13 20:40:01.868304 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:40:01.875878 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Apr 13 20:40:01.901953 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation Apr 13 20:40:01.939289 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:40:01.949021 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:40:02.056541 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:40:02.067928 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:40:02.104434 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:40:02.110915 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:40:02.120810 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:40:02.127633 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:40:02.141895 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:40:02.176560 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:40:02.189669 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:40:02.256240 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:40:02.256384 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:40:02.256415 kernel: blk-mq: reduced tag depth to 10240 Apr 13 20:40:02.257677 kernel: AES CTR mode by8 optimization enabled Apr 13 20:40:02.276167 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:40:02.288099 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 13 20:40:02.276374 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:40:02.293076 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:02.295767 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Apr 13 20:40:02.296031 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:02.298716 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:40:02.312110 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:40:02.346644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:02.356272 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 13 20:40:02.356612 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 13 20:40:02.357685 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 13 20:40:02.357977 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 13 20:40:02.359753 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:40:02.363862 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:40:02.372205 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:40:02.372266 kernel: GPT:17805311 != 33554431 Apr 13 20:40:02.372292 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:40:02.372318 kernel: GPT:17805311 != 33554431 Apr 13 20:40:02.372341 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:40:02.372365 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.372388 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 13 20:40:02.406008 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:40:02.425688 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (457) Apr 13 20:40:02.427692 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (447) Apr 13 20:40:02.433977 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. 
Apr 13 20:40:02.459525 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. Apr 13 20:40:02.466542 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 13 20:40:02.466784 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 13 20:40:02.479139 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:40:02.484997 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:40:02.500737 disk-uuid[551]: Primary Header is updated. Apr 13 20:40:02.500737 disk-uuid[551]: Secondary Entries is updated. Apr 13 20:40:02.500737 disk-uuid[551]: Secondary Header is updated. Apr 13 20:40:02.513344 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.525696 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:02.539699 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:03.535942 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:40:03.536488 disk-uuid[552]: The operation has completed successfully. Apr 13 20:40:03.616186 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:40:03.616367 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:40:03.639871 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:40:03.675919 sh[569]: Success Apr 13 20:40:03.697853 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 13 20:40:03.777174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:40:03.784217 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:40:03.813180 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:40:03.851486 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:40:03.851531 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:03.851555 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:40:03.867609 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:40:03.867687 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:40:03.894694 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:40:03.899528 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:40:03.908626 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:40:03.914870 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:40:03.985841 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:03.985883 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:03.985908 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:40:03.985933 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:40:03.985958 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:40:03.981918 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:40:04.008837 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:04.026125 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:40:04.042940 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 13 20:40:04.218009 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Apr 13 20:40:04.240054 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:40:04.250496 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:40:04.243584 ignition[638]: Ignition 2.19.0 Apr 13 20:40:04.243597 ignition[638]: Stage: fetch-offline Apr 13 20:40:04.243729 ignition[638]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.275261 systemd-networkd[756]: lo: Link UP Apr 13 20:40:04.243748 ignition[638]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.275266 systemd-networkd[756]: lo: Gained carrier Apr 13 20:40:04.243897 ignition[638]: parsed url from cmdline: "" Apr 13 20:40:04.277189 systemd-networkd[756]: Enumeration completed Apr 13 20:40:04.243904 ignition[638]: no config URL provided Apr 13 20:40:04.277786 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:40:04.243914 ignition[638]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:40:04.277793 systemd-networkd[756]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:40:04.243928 ignition[638]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:40:04.279966 systemd-networkd[756]: eth0: Link UP Apr 13 20:40:04.243940 ignition[638]: failed to fetch config: resource requires networking Apr 13 20:40:04.279973 systemd-networkd[756]: eth0: Gained carrier Apr 13 20:40:04.244259 ignition[638]: Ignition finished successfully Apr 13 20:40:04.279985 systemd-networkd[756]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:40:04.371667 ignition[760]: Ignition 2.19.0 Apr 13 20:40:04.294744 systemd-networkd[756]: eth0: DHCPv4 address 10.128.0.46/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:40:04.371682 ignition[760]: Stage: fetch Apr 13 20:40:04.300183 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:40:04.371900 ignition[760]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.307216 systemd[1]: Reached target network.target - Network. Apr 13 20:40:04.371915 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.330882 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:40:04.372056 ignition[760]: parsed url from cmdline: "" Apr 13 20:40:04.379812 unknown[760]: fetched base config from "system" Apr 13 20:40:04.372065 ignition[760]: no config URL provided Apr 13 20:40:04.379825 unknown[760]: fetched base config from "system" Apr 13 20:40:04.372077 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:40:04.379836 unknown[760]: fetched user config from "gcp" Apr 13 20:40:04.372090 ignition[760]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:40:04.382428 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:40:04.372115 ignition[760]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 13 20:40:04.406920 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:40:04.375253 ignition[760]: GET result: OK Apr 13 20:40:04.433052 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:40:04.375425 ignition[760]: parsing config with SHA512: 8623638941c7f93b1824329e648d322fefa1d687f47a0ece86d8fb97154c012689a1f4e9ae0178be94da6288ae443654d50c9da8b75087cbca16d56b051d241c Apr 13 20:40:04.465897 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 13 20:40:04.380357 ignition[760]: fetch: fetch complete Apr 13 20:40:04.490389 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:40:04.380370 ignition[760]: fetch: fetch passed Apr 13 20:40:04.510677 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:40:04.380445 ignition[760]: Ignition finished successfully Apr 13 20:40:04.534823 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:40:04.430240 ignition[766]: Ignition 2.19.0 Apr 13 20:40:04.553812 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:40:04.430250 ignition[766]: Stage: kargs Apr 13 20:40:04.568800 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:40:04.430564 ignition[766]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.583782 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:40:04.430579 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.607864 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:40:04.431670 ignition[766]: kargs: kargs passed Apr 13 20:40:04.431746 ignition[766]: Ignition finished successfully Apr 13 20:40:04.484324 ignition[771]: Ignition 2.19.0 Apr 13 20:40:04.484336 ignition[771]: Stage: disks Apr 13 20:40:04.484546 ignition[771]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:04.484559 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:04.485793 ignition[771]: disks: disks passed Apr 13 20:40:04.485852 ignition[771]: Ignition finished successfully Apr 13 20:40:04.648483 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:40:04.842763 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:40:04.874856 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 13 20:40:04.991821 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:40:04.991566 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:40:05.006467 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:40:05.023793 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:40:05.047813 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:40:05.048666 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:40:05.126825 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Apr 13 20:40:05.126872 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:05.126899 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:40:05.126932 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:40:05.126957 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:40:05.126983 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:40:05.048758 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:40:05.048805 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:40:05.120046 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:40:05.142344 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:40:05.167862 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 20:40:05.283363 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:40:05.293799 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:40:05.303786 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:40:05.313788 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:40:05.445957 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:40:05.473834 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:40:05.501839 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:40:05.493013 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:40:05.510986 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:40:05.544679 ignition[901]: INFO : Ignition 2.19.0 Apr 13 20:40:05.544679 ignition[901]: INFO : Stage: mount Apr 13 20:40:05.544679 ignition[901]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:40:05.544679 ignition[901]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:40:05.593836 ignition[901]: INFO : mount: mount passed Apr 13 20:40:05.593836 ignition[901]: INFO : Ignition finished successfully Apr 13 20:40:05.549091 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:40:05.559322 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:40:05.581780 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:40:05.876850 systemd-networkd[756]: eth0: Gained IPv6LL Apr 13 20:40:05.998909 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 20:40:06.034690 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (913)
Apr 13 20:40:06.052437 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:40:06.052527 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:40:06.052552 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:40:06.073942 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:40:06.074029 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:40:06.077086 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:40:06.113921 ignition[930]: INFO : Ignition 2.19.0
Apr 13 20:40:06.113921 ignition[930]: INFO : Stage: files
Apr 13 20:40:06.128841 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:40:06.128841 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:40:06.128841 ignition[930]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:40:06.128841 ignition[930]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:40:06.128841 ignition[930]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:40:06.126181 unknown[930]: wrote ssh authorized keys file for user: core
Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:40:06.229808 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:40:06.293800 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Apr 13 20:40:06.450493 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:40:06.450493 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:40:06.482818 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Apr 13 20:40:06.991891 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Apr 13 20:40:07.688358 ignition[930]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Apr 13 20:40:07.688358 ignition[930]: INFO : files: op(c): [started] processing unit "containerd.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(c): [finished] processing unit "containerd.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:40:07.727835 ignition[930]: INFO : files: files passed
Apr 13 20:40:07.727835 ignition[930]: INFO : Ignition finished successfully
Apr 13 20:40:07.694513 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:40:07.723887 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:40:07.753880 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:40:07.785274 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:40:08.018794 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:40:08.018794 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:40:07.785434 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:40:08.074853 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:40:07.813128 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:40:07.820119 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:40:07.853006 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:40:07.971965 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:40:07.972084 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:40:07.990561 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:40:08.010924 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:40:08.029092 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:40:08.035979 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:40:08.096494 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:40:08.117875 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:40:08.154417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:40:08.166995 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:40:08.189030 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:40:08.206980 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:40:08.207150 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:40:08.241094 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:40:08.262000 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:40:08.280998 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:40:08.302012 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:40:08.323027 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:40:08.344064 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:40:08.363065 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:40:08.382009 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:40:08.403019 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:40:08.423076 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:40:08.439009 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:40:08.439214 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:40:08.469055 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:40:08.488960 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:40:08.509958 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:40:08.637838 ignition[982]: INFO : Ignition 2.19.0
Apr 13 20:40:08.637838 ignition[982]: INFO : Stage: umount
Apr 13 20:40:08.637838 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:40:08.637838 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:40:08.637838 ignition[982]: INFO : umount: umount passed
Apr 13 20:40:08.637838 ignition[982]: INFO : Ignition finished successfully
Apr 13 20:40:08.510141 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:40:08.531975 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:40:08.532136 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:40:08.562068 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:40:08.562309 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:40:08.580048 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:40:08.580187 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:40:08.604950 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:40:08.651956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:40:08.660789 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:40:08.661112 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:40:08.709161 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:40:08.709364 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:40:08.743933 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:40:08.745228 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:40:08.745388 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:40:08.760522 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:40:08.760638 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:40:08.782165 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:40:08.782290 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:40:08.791956 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:40:08.792020 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:40:08.821051 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:40:08.821127 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:40:08.829051 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:40:08.829118 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:40:08.846079 systemd[1]: Stopped target network.target - Network.
Apr 13 20:40:08.863018 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:40:08.863108 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:40:08.895989 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:40:08.906030 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:40:08.909749 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:40:08.921030 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:40:08.954912 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:40:08.973001 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:40:08.973080 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:40:08.981033 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:40:08.981103 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:40:09.012879 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:40:09.012982 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:40:09.030852 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:40:09.030943 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:40:09.051873 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:40:09.051960 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:40:09.070105 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:40:09.076732 systemd-networkd[756]: eth0: DHCPv6 lease lost
Apr 13 20:40:09.086118 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:40:09.120376 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:40:09.120514 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:40:09.129616 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:40:09.130634 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:40:09.164721 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:40:09.164776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:40:09.195813 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:40:09.207781 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:40:09.207894 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:40:09.218932 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:40:09.219024 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:40:09.241913 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:40:09.242023 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:40:09.259877 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:40:09.688780 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:40:09.259985 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:40:09.279044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:40:09.298316 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:40:09.298490 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:40:09.313139 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:40:09.313205 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:40:09.334170 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:40:09.334229 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:40:09.343843 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:40:09.343953 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:40:09.364126 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:40:09.364229 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:40:09.389988 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:40:09.390212 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:40:09.453952 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:40:09.471782 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:40:09.471914 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:40:09.482901 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Apr 13 20:40:09.482991 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:40:09.493835 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:40:09.493926 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:40:09.512896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:40:09.512987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:40:09.534353 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:40:09.534484 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:40:09.552218 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:40:09.552340 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:40:09.574143 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:40:09.586916 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:40:09.636636 systemd[1]: Switching root.
Apr 13 20:40:09.985799 systemd-journald[184]: Journal stopped
Apr 13 20:40:12.325752 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:40:12.325797 kernel: SELinux: policy capability open_perms=1
Apr 13 20:40:12.325811 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:40:12.325823 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:40:12.325833 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:40:12.325845 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:40:12.325857 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:40:12.325872 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:40:12.325884 kernel: audit: type=1403 audit(1776112810.380:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:40:12.325898 systemd[1]: Successfully loaded SELinux policy in 79.371ms.
Apr 13 20:40:12.325913 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.706ms.
Apr 13 20:40:12.325926 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:40:12.325939 systemd[1]: Detected virtualization google.
Apr 13 20:40:12.325952 systemd[1]: Detected architecture x86-64.
Apr 13 20:40:12.325968 systemd[1]: Detected first boot.
Apr 13 20:40:12.325982 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:40:12.325996 zram_generator::config[1040]: No configuration found.
Apr 13 20:40:12.326010 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:40:12.326023 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:40:12.326040 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:40:12.326054 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:40:12.326067 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:40:12.326080 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:40:12.326094 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:40:12.326116 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:40:12.326138 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:40:12.326166 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:40:12.326188 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:40:12.326210 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:40:12.326226 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:40:12.326240 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:40:12.326256 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:40:12.326269 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:40:12.326283 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:40:12.326300 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:40:12.326313 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:40:12.326327 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:40:12.326340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:40:12.326353 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:40:12.326367 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:40:12.326384 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:40:12.326398 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:40:12.326412 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:40:12.326429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:40:12.326442 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:40:12.326456 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:40:12.326470 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:40:12.326483 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:40:12.326497 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:40:12.326510 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:40:12.326535 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:40:12.326550 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:40:12.326565 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:40:12.326579 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:40:12.326596 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:40:12.326609 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:40:12.326623 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:40:12.326637 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:40:12.326677 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:40:12.326701 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:40:12.326716 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:40:12.326731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:40:12.326745 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:40:12.326764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:40:12.326778 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:40:12.326792 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:40:12.326805 kernel: ACPI: bus type drm_connector registered
Apr 13 20:40:12.326819 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Apr 13 20:40:12.326832 kernel: fuse: init (API version 7.39)
Apr 13 20:40:12.326845 kernel: loop: module loaded
Apr 13 20:40:12.326858 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Apr 13 20:40:12.326875 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:40:12.326889 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:40:12.326933 systemd-journald[1146]: Collecting audit messages is disabled.
Apr 13 20:40:12.326963 systemd-journald[1146]: Journal started
Apr 13 20:40:12.326993 systemd-journald[1146]: Runtime Journal (/run/log/journal/4752a4c89f914ef28fdf9ff144019466) is 8.0M, max 148.7M, 140.7M free.
Apr 13 20:40:12.350683 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:40:12.375687 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:40:12.387707 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:40:12.412690 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:40:12.425723 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:40:12.442110 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:40:12.452974 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:40:12.462965 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:40:12.472976 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:40:12.482962 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:40:12.492962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:40:12.503343 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:40:12.515351 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:40:12.527316 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:40:12.527637 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:40:12.539297 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:40:12.539603 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:40:12.551182 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:40:12.551442 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:40:12.561157 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:40:12.561416 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:40:12.573133 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:40:12.573394 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:40:12.583104 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:40:12.583359 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:40:12.593189 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:40:12.603125 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:40:12.616143 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:40:12.628202 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:40:12.652131 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:40:12.673791 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:40:12.688778 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:40:12.698845 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:40:12.707878 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:40:12.724910 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:40:12.736840 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:40:12.745783 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:40:12.755832 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:40:12.763532 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:40:12.768808 systemd-journald[1146]: Time spent on flushing to /var/log/journal/4752a4c89f914ef28fdf9ff144019466 is 106.801ms for 918 entries.
Apr 13 20:40:12.768808 systemd-journald[1146]: System Journal (/var/log/journal/4752a4c89f914ef28fdf9ff144019466) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:40:12.904797 systemd-journald[1146]: Received client request to flush runtime journal.
Apr 13 20:40:12.795041 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:40:12.817884 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:40:12.840487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:40:12.851964 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:40:12.863308 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:40:12.880124 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:40:12.892403 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:40:12.911803 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:40:12.925219 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Apr 13 20:40:12.926459 systemd-tmpfiles[1183]: ACLs are not supported, ignoring.
Apr 13 20:40:12.929135 udevadm[1185]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 13 20:40:12.940389 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:40:12.960882 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 13 20:40:13.024044 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 13 20:40:13.040913 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:40:13.084744 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Apr 13 20:40:13.085287 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Apr 13 20:40:13.095209 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:40:13.527130 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 13 20:40:13.545916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:40:13.577746 systemd-udevd[1208]: Using default interface naming scheme 'v255'. Apr 13 20:40:13.611131 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:40:13.638100 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:40:13.676933 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 13 20:40:13.705686 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Apr 13 20:40:13.803755 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Apr 13 20:40:13.871974 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Apr 13 20:40:13.886682 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Apr 13 20:40:13.954875 kernel: ACPI: button: Power Button [PWRF] Apr 13 20:40:13.976687 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Apr 13 20:40:13.998696 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Apr 13 20:40:14.011764 systemd-networkd[1218]: lo: Link UP Apr 13 20:40:14.012233 systemd-networkd[1218]: lo: Gained carrier Apr 13 20:40:14.016977 systemd-networkd[1218]: Enumeration completed Apr 13 20:40:14.017196 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:40:14.018565 systemd-networkd[1218]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:40:14.020719 systemd-networkd[1218]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:40:14.021434 systemd-networkd[1218]: eth0: Link UP Apr 13 20:40:14.021441 systemd-networkd[1218]: eth0: Gained carrier Apr 13 20:40:14.021465 systemd-networkd[1218]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:40:14.029683 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1212) Apr 13 20:40:14.040671 kernel: mousedev: PS/2 mouse device common for all mice Apr 13 20:40:14.052683 kernel: EDAC MC: Ver: 3.0.0 Apr 13 20:40:14.050728 systemd-networkd[1218]: eth0: DHCPv4 address 10.128.0.46/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:40:14.070689 kernel: ACPI: button: Sleep Button [SLPF] Apr 13 20:40:14.074124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Apr 13 20:40:14.139440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:40:14.151323 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 13 20:40:14.169921 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 13 20:40:14.188527 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:40:14.207631 lvm[1249]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:40:14.246909 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 13 20:40:14.247433 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:40:14.255988 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 13 20:40:14.265222 lvm[1255]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 13 20:40:14.298661 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 13 20:40:14.299822 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:40:14.299925 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 13 20:40:14.299962 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:40:14.300028 systemd[1]: Reached target machines.target - Containers. Apr 13 20:40:14.302306 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Apr 13 20:40:14.309975 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 13 20:40:14.317931 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Apr 13 20:40:14.318197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:40:14.321129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 13 20:40:14.377644 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Apr 13 20:40:14.397869 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 13 20:40:14.407280 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 13 20:40:14.409828 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:40:14.421366 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 13 20:40:14.436670 kernel: loop0: detected capacity change from 0 to 140768 Apr 13 20:40:14.439614 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 13 20:40:14.454486 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Apr 13 20:40:14.507205 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 13 20:40:14.535689 kernel: loop1: detected capacity change from 0 to 54824 Apr 13 20:40:14.591703 kernel: loop2: detected capacity change from 0 to 142488 Apr 13 20:40:14.677693 kernel: loop3: detected capacity change from 0 to 228704 Apr 13 20:40:14.732688 kernel: loop4: detected capacity change from 0 to 140768 Apr 13 20:40:14.780721 kernel: loop5: detected capacity change from 0 to 54824 Apr 13 20:40:14.807711 kernel: loop6: detected capacity change from 0 to 142488 Apr 13 20:40:14.847698 kernel: loop7: detected capacity change from 0 to 228704 Apr 13 20:40:14.876674 (sd-merge)[1280]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Apr 13 20:40:14.877382 (sd-merge)[1280]: Merged extensions into '/usr'. 
Apr 13 20:40:14.884982 systemd[1]: Reloading requested from client PID 1266 ('systemd-sysext') (unit systemd-sysext.service)... Apr 13 20:40:14.885007 systemd[1]: Reloading... Apr 13 20:40:14.982826 zram_generator::config[1305]: No configuration found. Apr 13 20:40:15.242422 ldconfig[1261]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 13 20:40:15.258947 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:40:15.347068 systemd[1]: Reloading finished in 461 ms. Apr 13 20:40:15.364957 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 13 20:40:15.376296 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 13 20:40:15.400982 systemd[1]: Starting ensure-sysext.service... Apr 13 20:40:15.412872 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:40:15.431816 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... Apr 13 20:40:15.432039 systemd[1]: Reloading... Apr 13 20:40:15.461542 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 13 20:40:15.462838 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 13 20:40:15.464798 systemd-tmpfiles[1358]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 13 20:40:15.465394 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Apr 13 20:40:15.465533 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Apr 13 20:40:15.471310 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. 
Apr 13 20:40:15.471328 systemd-tmpfiles[1358]: Skipping /boot Apr 13 20:40:15.494424 systemd-tmpfiles[1358]: Detected autofs mount point /boot during canonicalization of boot. Apr 13 20:40:15.494451 systemd-tmpfiles[1358]: Skipping /boot Apr 13 20:40:15.558717 zram_generator::config[1387]: No configuration found. Apr 13 20:40:15.714459 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:40:15.796811 systemd-networkd[1218]: eth0: Gained IPv6LL Apr 13 20:40:15.808742 systemd[1]: Reloading finished in 375 ms. Apr 13 20:40:15.826232 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 13 20:40:15.842439 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:40:15.866550 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:40:15.885899 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 13 20:40:15.909112 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 13 20:40:15.930085 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:40:15.949028 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 13 20:40:15.956095 augenrules[1456]: No rules Apr 13 20:40:15.959864 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:40:15.983926 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:15.984300 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:40:15.991033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Apr 13 20:40:16.011743 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:40:16.032713 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:40:16.044819 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:40:16.045074 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:16.048326 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 13 20:40:16.060787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:40:16.061077 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:40:16.073643 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:40:16.074018 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:40:16.086558 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:40:16.087716 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:40:16.099554 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 13 20:40:16.112722 systemd-resolved[1453]: Positive Trust Anchors: Apr 13 20:40:16.112745 systemd-resolved[1453]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:40:16.112818 systemd-resolved[1453]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:40:16.122262 systemd-resolved[1453]: Defaulting to hostname 'linux'. Apr 13 20:40:16.124717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:16.125186 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:40:16.131798 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:40:16.157083 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:40:16.181401 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 13 20:40:16.191943 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:40:16.201462 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 13 20:40:16.211779 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Apr 13 20:40:16.212187 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:16.217828 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:40:16.229361 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 13 20:40:16.241557 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:40:16.241849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:40:16.253504 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:40:16.253811 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:40:16.265459 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:40:16.265753 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:40:16.276519 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 13 20:40:16.296543 systemd[1]: Reached target network.target - Network. Apr 13 20:40:16.304965 systemd[1]: Reached target network-online.target - Network is Online. Apr 13 20:40:16.314933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:40:16.325941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:16.326343 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 13 20:40:16.333024 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 13 20:40:16.355035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 13 20:40:16.373032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 13 20:40:16.389027 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Apr 13 20:40:16.403020 systemd[1]: Starting setup-oem.service - Setup OEM... Apr 13 20:40:16.411915 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 13 20:40:16.412262 systemd[1]: Reached target time-set.target - System Time Set. Apr 13 20:40:16.422948 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 13 20:40:16.423125 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 13 20:40:16.426570 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 13 20:40:16.427035 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 13 20:40:16.440590 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 13 20:40:16.440884 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 13 20:40:16.451443 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 13 20:40:16.451745 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 13 20:40:16.463437 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 13 20:40:16.463868 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 13 20:40:16.487291 systemd[1]: Finished ensure-sysext.service. Apr 13 20:40:16.496620 systemd[1]: Finished setup-oem.service - Setup OEM. Apr 13 20:40:16.515875 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login... Apr 13 20:40:16.525823 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 13 20:40:16.525895 systemd[1]: Reached target sysinit.target - System Initialization. 
Apr 13 20:40:16.535975 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 13 20:40:16.546875 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Apr 13 20:40:16.557970 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 13 20:40:16.567958 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 13 20:40:16.578780 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 13 20:40:16.589788 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 13 20:40:16.589851 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:40:16.597753 systemd[1]: Reached target timers.target - Timer Units. Apr 13 20:40:16.606588 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 13 20:40:16.618603 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 13 20:40:16.627091 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 13 20:40:16.628136 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login. Apr 13 20:40:16.640034 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 13 20:40:16.656604 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 13 20:40:16.666812 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:40:16.676813 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:40:16.685096 systemd[1]: System is tainted: cgroupsv1 Apr 13 20:40:16.685192 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Apr 13 20:40:16.685240 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 13 20:40:16.690798 systemd[1]: Starting containerd.service - containerd container runtime... Apr 13 20:40:16.714885 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 13 20:40:16.736475 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 13 20:40:16.769334 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 13 20:40:16.787967 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 13 20:40:16.795785 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 13 20:40:16.800888 jq[1533]: false Apr 13 20:40:16.814307 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:16.820409 coreos-metadata[1530]: Apr 13 20:40:16.820 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Apr 13 20:40:16.822173 coreos-metadata[1530]: Apr 13 20:40:16.821 INFO Fetch successful Apr 13 20:40:16.822458 coreos-metadata[1530]: Apr 13 20:40:16.822 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Apr 13 20:40:16.822825 coreos-metadata[1530]: Apr 13 20:40:16.822 INFO Fetch successful Apr 13 20:40:16.823148 coreos-metadata[1530]: Apr 13 20:40:16.823 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Apr 13 20:40:16.823405 coreos-metadata[1530]: Apr 13 20:40:16.823 INFO Fetch successful Apr 13 20:40:16.823405 coreos-metadata[1530]: Apr 13 20:40:16.823 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Apr 13 20:40:16.829681 coreos-metadata[1530]: Apr 13 20:40:16.827 INFO Fetch successful Apr 13 20:40:16.833442 
systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 13 20:40:16.849929 extend-filesystems[1536]: Found loop4 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found loop5 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found loop6 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found loop7 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda1 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda2 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda3 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found usr Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda4 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda6 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda7 Apr 13 20:40:16.849929 extend-filesystems[1536]: Found sda9 Apr 13 20:40:16.849929 extend-filesystems[1536]: Checking size of /dev/sda9 Apr 13 20:40:16.971164 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks Apr 13 20:40:16.923716 dbus-daemon[1532]: [system] SELinux support is enabled Apr 13 20:40:16.853906 systemd[1]: Started ntpd.service - Network Time Service. Apr 13 20:40:16.973259 extend-filesystems[1536]: Resized partition /dev/sda9 Apr 13 20:40:16.941968 dbus-daemon[1532]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1218 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 13 20:40:16.878335 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Apr 13 20:40:17.005846 extend-filesystems[1552]: resize2fs 1.47.1 (20-May-2024) Apr 13 20:40:17.052725 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1571) Apr 13 20:40:16.983796 ntpd[1544]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting Apr 13 20:40:16.936867 systemd[1]: Starting oem-gce.service - GCE Linux Agent... Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: ---------------------------------------------------- Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: ntp-4 is maintained by Network Time Foundation, Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: corporation. 
Support and training for ntp-4 are Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: available at https://www.nwtime.org/support Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: ---------------------------------------------------- Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: proto: precision = 0.076 usec (-24) Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: basedate set to 2026-04-01 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:16 ntpd[1544]: gps base set to 2026-04-05 (week 2413) Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen normally on 3 eth0 10.128.0.46:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen normally on 4 lo [::1]:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:2e%2]:123 Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: Listening on routing socket on fd #22 for interface updates Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 20:40:17.059144 ntpd[1544]: 13 Apr 20:40:17 ntpd[1544]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 20:40:16.983827 ntpd[1544]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Apr 13 20:40:16.959800 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Apr 13 20:40:16.983842 ntpd[1544]: ---------------------------------------------------- Apr 13 20:40:17.078834 kernel: EXT4-fs (sda9): resized filesystem to 3587067 Apr 13 20:40:16.979910 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Apr 13 20:40:17.079145 init.sh[1559]: + '[' -e /etc/default/instance_configs.cfg.template ']' Apr 13 20:40:17.079145 init.sh[1559]: + echo -e '[InstanceSetup]\nset_host_keys = false' Apr 13 20:40:17.079145 init.sh[1559]: + /usr/bin/google_instance_setup Apr 13 20:40:16.983856 ntpd[1544]: ntp-4 is maintained by Network Time Foundation, Apr 13 20:40:17.040912 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 13 20:40:16.983870 ntpd[1544]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Apr 13 20:40:16.983884 ntpd[1544]: corporation. Support and training for ntp-4 are Apr 13 20:40:16.983898 ntpd[1544]: available at https://www.nwtime.org/support Apr 13 20:40:16.983912 ntpd[1544]: ---------------------------------------------------- Apr 13 20:40:16.989725 ntpd[1544]: proto: precision = 0.076 usec (-24) Apr 13 20:40:16.990995 ntpd[1544]: basedate set to 2026-04-01 Apr 13 20:40:16.991021 ntpd[1544]: gps base set to 2026-04-05 (week 2413) Apr 13 20:40:17.002446 ntpd[1544]: Listen and drop on 0 v6wildcard [::]:123 Apr 13 20:40:17.002516 ntpd[1544]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Apr 13 20:40:17.002824 ntpd[1544]: Listen normally on 2 lo 127.0.0.1:123 Apr 13 20:40:17.002886 ntpd[1544]: Listen normally on 3 eth0 10.128.0.46:123 Apr 13 20:40:17.002947 ntpd[1544]: Listen normally on 4 lo [::1]:123 Apr 13 20:40:17.003009 ntpd[1544]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:2e%2]:123 Apr 13 20:40:17.003075 ntpd[1544]: Listening on routing socket on fd #22 for interface updates Apr 13 20:40:17.007389 ntpd[1544]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 20:40:17.007432 ntpd[1544]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Apr 13 20:40:17.085254 extend-filesystems[1552]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Apr 13 20:40:17.085254 extend-filesystems[1552]: old_desc_blocks = 1, new_desc_blocks = 2 Apr 13 20:40:17.085254 extend-filesystems[1552]: The filesystem on /dev/sda9 is 
now 3587067 (4k) blocks long. Apr 13 20:40:17.129842 extend-filesystems[1536]: Resized filesystem in /dev/sda9 Apr 13 20:40:17.085882 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 13 20:40:17.117474 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Apr 13 20:40:17.132919 systemd[1]: Starting update-engine.service - Update Engine... Apr 13 20:40:17.163029 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 13 20:40:17.177806 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 13 20:40:17.191731 jq[1589]: true Apr 13 20:40:17.219477 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 13 20:40:17.219905 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 13 20:40:17.220455 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 13 20:40:17.224934 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 13 20:40:17.234697 update_engine[1587]: I20260413 20:40:17.231323 1587 main.cc:92] Flatcar Update Engine starting Apr 13 20:40:17.234697 update_engine[1587]: I20260413 20:40:17.233434 1587 update_check_scheduler.cc:74] Next update check in 6m58s Apr 13 20:40:17.249007 systemd[1]: motdgen.service: Deactivated successfully. Apr 13 20:40:17.249415 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 13 20:40:17.259477 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 13 20:40:17.275250 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 13 20:40:17.275687 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Apr 13 20:40:17.325463 jq[1597]: true Apr 13 20:40:17.349005 (ntainerd)[1598]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 13 20:40:17.377967 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 13 20:40:17.395602 systemd-logind[1582]: Watching system buttons on /dev/input/event1 (Power Button) Apr 13 20:40:17.396175 systemd-logind[1582]: Watching system buttons on /dev/input/event3 (Sleep Button) Apr 13 20:40:17.396494 systemd-logind[1582]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 13 20:40:17.400847 dbus-daemon[1532]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 13 20:40:17.402950 systemd-logind[1582]: New seat seat0. Apr 13 20:40:17.419727 systemd[1]: Started systemd-logind.service - User Login Management. Apr 13 20:40:17.440156 systemd[1]: Started update-engine.service - Update Engine. Apr 13 20:40:17.457796 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 13 20:40:17.461441 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 13 20:40:17.461732 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 13 20:40:17.482924 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 13 20:40:17.490392 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 13 20:40:17.490672 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 13 20:40:17.502779 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Apr 13 20:40:17.516018 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 13 20:40:17.536463 tar[1596]: linux-amd64/LICENSE Apr 13 20:40:17.576623 tar[1596]: linux-amd64/helm Apr 13 20:40:17.603846 bash[1636]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:40:17.601908 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 13 20:40:17.629823 systemd[1]: Starting sshkeys.service... Apr 13 20:40:17.690593 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 13 20:40:17.718868 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 13 20:40:17.846311 locksmithd[1634]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 20:40:17.939750 coreos-metadata[1641]: Apr 13 20:40:17.937 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 13 20:40:17.948723 coreos-metadata[1641]: Apr 13 20:40:17.941 INFO Fetch failed with 404: resource not found Apr 13 20:40:17.948723 coreos-metadata[1641]: Apr 13 20:40:17.941 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 13 20:40:17.948723 coreos-metadata[1641]: Apr 13 20:40:17.942 INFO Fetch successful Apr 13 20:40:17.948723 coreos-metadata[1641]: Apr 13 20:40:17.943 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 13 20:40:17.949478 coreos-metadata[1641]: Apr 13 20:40:17.949 INFO Fetch failed with 404: resource not found Apr 13 20:40:17.949478 coreos-metadata[1641]: Apr 13 20:40:17.949 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 13 20:40:17.956701 
coreos-metadata[1641]: Apr 13 20:40:17.956 INFO Fetch failed with 404: resource not found Apr 13 20:40:17.956701 coreos-metadata[1641]: Apr 13 20:40:17.956 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 13 20:40:17.959828 coreos-metadata[1641]: Apr 13 20:40:17.958 INFO Fetch successful Apr 13 20:40:17.967469 unknown[1641]: wrote ssh authorized keys file for user: core Apr 13 20:40:18.046798 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:40:18.041929 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:40:18.061097 systemd[1]: Finished sshkeys.service. Apr 13 20:40:18.178434 dbus-daemon[1532]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 13 20:40:18.178683 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 13 20:40:18.181546 dbus-daemon[1532]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 13 20:40:18.201718 systemd[1]: Starting polkit.service - Authorization Manager... Apr 13 20:40:18.276517 polkitd[1660]: Started polkitd version 121 Apr 13 20:40:18.315609 polkitd[1660]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 20:40:18.325842 polkitd[1660]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 20:40:18.334010 polkitd[1660]: Finished loading, compiling and executing 2 rules Apr 13 20:40:18.334843 dbus-daemon[1532]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 20:40:18.335602 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 13 20:40:18.335981 polkitd[1660]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 20:40:18.397965 systemd-hostnamed[1630]: Hostname set to (transient) Apr 13 20:40:18.400285 systemd-resolved[1453]: System hostname changed to 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal'. Apr 13 20:40:18.567738 containerd[1598]: time="2026-04-13T20:40:18.566505918Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 20:40:18.596591 sshd_keygen[1588]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:40:18.715629 containerd[1598]: time="2026-04-13T20:40:18.715336511Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.722758 containerd[1598]: time="2026-04-13T20:40:18.722277302Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:40:18.722758 containerd[1598]: time="2026-04-13T20:40:18.722335739Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 20:40:18.722758 containerd[1598]: time="2026-04-13T20:40:18.722381404Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 20:40:18.722758 containerd[1598]: time="2026-04-13T20:40:18.722689632Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 13 20:40:18.722758 containerd[1598]: time="2026-04-13T20:40:18.722725984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Apr 13 20:40:18.723510 containerd[1598]: time="2026-04-13T20:40:18.723444156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:40:18.723510 containerd[1598]: time="2026-04-13T20:40:18.723480236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726188 containerd[1598]: time="2026-04-13T20:40:18.725478361Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726188 containerd[1598]: time="2026-04-13T20:40:18.725534570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726188 containerd[1598]: time="2026-04-13T20:40:18.725563210Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726188 containerd[1598]: time="2026-04-13T20:40:18.725582745Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726188 containerd[1598]: time="2026-04-13T20:40:18.725793208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.726605 containerd[1598]: time="2026-04-13T20:40:18.726412019Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:40:18.728373 containerd[1598]: time="2026-04-13T20:40:18.727904812Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:40:18.728373 containerd[1598]: time="2026-04-13T20:40:18.727970250Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 20:40:18.728373 containerd[1598]: time="2026-04-13T20:40:18.728174423Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 20:40:18.728373 containerd[1598]: time="2026-04-13T20:40:18.728306900Z" level=info msg="metadata content store policy set" policy=shared Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.741514444Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.741639754Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.741733356Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.741761299Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.741803935Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:40:18.742321 containerd[1598]: time="2026-04-13T20:40:18.742087214Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743146687Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743457922Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743488124Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743534368Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743559005Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743581711Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743619921Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743643810Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743696707Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743719842Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743760363Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743783138Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743831472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744171 containerd[1598]: time="2026-04-13T20:40:18.743855831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.743876316Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.743915974Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.743963108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.744011684Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.744033238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.744098585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.744851 containerd[1598]: time="2026-04-13T20:40:18.744125323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.745218524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747446272Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747503274Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747537738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747594498Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747634773Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747686100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747707667Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747902020Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747934337Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747955571Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.747994335Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:40:18.750674 containerd[1598]: time="2026-04-13T20:40:18.748014223Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.751299 containerd[1598]: time="2026-04-13T20:40:18.748036364Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:40:18.751299 containerd[1598]: time="2026-04-13T20:40:18.748072218Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:40:18.751299 containerd[1598]: time="2026-04-13T20:40:18.748092857Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 13 20:40:18.751438 containerd[1598]: time="2026-04-13T20:40:18.750373356Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:40:18.751438 containerd[1598]: time="2026-04-13T20:40:18.750497401Z" level=info msg="Connect containerd service" Apr 13 20:40:18.751438 containerd[1598]: time="2026-04-13T20:40:18.750557745Z" level=info msg="using legacy CRI server" Apr 13 20:40:18.751438 containerd[1598]: time="2026-04-13T20:40:18.750572086Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:40:18.753313 containerd[1598]: 
time="2026-04-13T20:40:18.752020901Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:40:18.755369 containerd[1598]: time="2026-04-13T20:40:18.755320681Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.755947498Z" level=info msg="Start subscribing containerd event" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.756033127Z" level=info msg="Start recovering state" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.756126375Z" level=info msg="Start event monitor" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.756166688Z" level=info msg="Start snapshots syncer" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.756182366Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.756196638Z" level=info msg="Start streaming server" Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.760042327Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.760111328Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:40:18.761687 containerd[1598]: time="2026-04-13T20:40:18.760553089Z" level=info msg="containerd successfully booted in 0.200116s" Apr 13 20:40:18.760389 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:40:18.786323 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:40:18.803078 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:40:18.830860 systemd[1]: issuegen.service: Deactivated successfully. 
Apr 13 20:40:18.831315 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:40:18.853399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:40:18.864903 instance-setup[1568]: INFO Running google_set_multiqueue. Apr 13 20:40:18.897307 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:40:18.918207 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:40:18.921081 instance-setup[1568]: INFO Set channels for eth0 to 2. Apr 13 20:40:18.932612 instance-setup[1568]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Apr 13 20:40:18.935862 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:40:18.938073 instance-setup[1568]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 13 20:40:18.939687 instance-setup[1568]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 13 20:40:18.942417 instance-setup[1568]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 13 20:40:18.944232 instance-setup[1568]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Apr 13 20:40:18.946983 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:40:18.947831 instance-setup[1568]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 13 20:40:18.947883 instance-setup[1568]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Apr 13 20:40:18.950106 instance-setup[1568]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 13 20:40:18.962410 instance-setup[1568]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:40:18.967508 instance-setup[1568]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:40:18.969950 instance-setup[1568]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 13 20:40:18.969995 instance-setup[1568]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 13 20:40:18.994241 init.sh[1559]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 13 20:40:19.205974 startup-script[1722]: INFO Starting startup scripts. Apr 13 20:40:19.213616 startup-script[1722]: INFO No startup scripts found in metadata. Apr 13 20:40:19.213731 startup-script[1722]: INFO Finished running startup scripts. Apr 13 20:40:19.240761 init.sh[1559]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 13 20:40:19.240761 init.sh[1559]: + daemon_pids=() Apr 13 20:40:19.240761 init.sh[1559]: + for d in accounts clock_skew network Apr 13 20:40:19.240761 init.sh[1559]: + daemon_pids+=($!) Apr 13 20:40:19.240761 init.sh[1559]: + for d in accounts clock_skew network Apr 13 20:40:19.240761 init.sh[1559]: + daemon_pids+=($!) Apr 13 20:40:19.240761 init.sh[1559]: + for d in accounts clock_skew network Apr 13 20:40:19.240761 init.sh[1559]: + daemon_pids+=($!) Apr 13 20:40:19.240761 init.sh[1559]: + NOTIFY_SOCKET=/run/systemd/notify Apr 13 20:40:19.240761 init.sh[1559]: + /usr/bin/systemd-notify --ready Apr 13 20:40:19.241953 init.sh[1725]: + /usr/bin/google_accounts_daemon Apr 13 20:40:19.242401 init.sh[1726]: + /usr/bin/google_clock_skew_daemon Apr 13 20:40:19.244232 init.sh[1727]: + /usr/bin/google_network_daemon Apr 13 20:40:19.269895 systemd[1]: Started oem-gce.service - GCE Linux Agent. 
Apr 13 20:40:19.285830 init.sh[1559]: + wait -n 1725 1726 1727 Apr 13 20:40:19.336787 tar[1596]: linux-amd64/README.md Apr 13 20:40:19.365294 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 20:40:19.613980 google-networking[1727]: INFO Starting Google Networking daemon. Apr 13 20:40:19.686208 google-clock-skew[1726]: INFO Starting Google Clock Skew daemon. Apr 13 20:40:19.695577 google-clock-skew[1726]: INFO Clock drift token has changed: 0. Apr 13 20:40:19.707421 groupadd[1741]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 13 20:40:19.711733 groupadd[1741]: group added to /etc/gshadow: name=google-sudoers Apr 13 20:40:19.772904 groupadd[1741]: new group: name=google-sudoers, GID=1000 Apr 13 20:40:19.801397 google-accounts[1725]: INFO Starting Google Accounts daemon. Apr 13 20:40:19.814014 google-accounts[1725]: WARNING OS Login not installed. Apr 13 20:40:19.815305 google-accounts[1725]: INFO Creating a new user account for 0. Apr 13 20:40:19.820162 init.sh[1750]: useradd: invalid user name '0': use --badname to ignore Apr 13 20:40:19.820724 google-accounts[1725]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Apr 13 20:40:19.890918 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:40:19.902890 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 20:40:19.912969 systemd[1]: Startup finished in 10.629s (kernel) + 9.608s (userspace) = 20.238s. Apr 13 20:40:19.915409 (kubelet)[1760]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:40:20.000133 systemd-resolved[1453]: Clock change detected. Flushing caches. Apr 13 20:40:20.001630 google-clock-skew[1726]: INFO Synced system time with hardware clock. 
Apr 13 20:40:20.457321 kubelet[1760]: E0413 20:40:20.457160 1760 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:40:20.460410 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:40:20.460890 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:40:25.991221 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 13 20:40:25.996429 systemd[1]: Started sshd@0-10.128.0.46:22-20.229.252.112:44526.service - OpenSSH per-connection server daemon (20.229.252.112:44526). Apr 13 20:40:26.726111 sshd[1772]: Accepted publickey for core from 20.229.252.112 port 44526 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:26.728113 sshd[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:26.739229 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:40:26.745412 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:40:26.751181 systemd-logind[1582]: New session 1 of user core. Apr 13 20:40:26.767587 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 20:40:26.778950 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:40:26.807469 (systemd)[1778]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:40:26.943159 systemd[1778]: Queued start job for default target default.target. Apr 13 20:40:26.943832 systemd[1778]: Created slice app.slice - User Application Slice. Apr 13 20:40:26.943875 systemd[1778]: Reached target paths.target - Paths. 
Apr 13 20:40:26.943902 systemd[1778]: Reached target timers.target - Timers. Apr 13 20:40:26.949213 systemd[1778]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:40:26.967372 systemd[1778]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:40:26.967476 systemd[1778]: Reached target sockets.target - Sockets. Apr 13 20:40:26.967502 systemd[1778]: Reached target basic.target - Basic System. Apr 13 20:40:26.967574 systemd[1778]: Reached target default.target - Main User Target. Apr 13 20:40:26.967630 systemd[1778]: Startup finished in 151ms. Apr 13 20:40:26.968429 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:40:26.979751 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:40:27.487432 systemd[1]: Started sshd@1-10.128.0.46:22-20.229.252.112:44534.service - OpenSSH per-connection server daemon (20.229.252.112:44534). Apr 13 20:40:28.203459 sshd[1790]: Accepted publickey for core from 20.229.252.112 port 44534 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:28.205296 sshd[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:28.211859 systemd-logind[1582]: New session 2 of user core. Apr 13 20:40:28.221438 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:40:28.700770 sshd[1790]: pam_unix(sshd:session): session closed for user core Apr 13 20:40:28.708557 systemd[1]: sshd@1-10.128.0.46:22-20.229.252.112:44534.service: Deactivated successfully. Apr 13 20:40:28.712506 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:40:28.713440 systemd-logind[1582]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:40:28.714842 systemd-logind[1582]: Removed session 2. Apr 13 20:40:28.820468 systemd[1]: Started sshd@2-10.128.0.46:22-20.229.252.112:44538.service - OpenSSH per-connection server daemon (20.229.252.112:44538). 
Apr 13 20:40:29.539533 sshd[1798]: Accepted publickey for core from 20.229.252.112 port 44538 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:29.541401 sshd[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:29.548327 systemd-logind[1582]: New session 3 of user core. Apr 13 20:40:29.554517 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:40:30.035202 sshd[1798]: pam_unix(sshd:session): session closed for user core Apr 13 20:40:30.040871 systemd-logind[1582]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:40:30.042946 systemd[1]: sshd@2-10.128.0.46:22-20.229.252.112:44538.service: Deactivated successfully. Apr 13 20:40:30.047533 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:40:30.048696 systemd-logind[1582]: Removed session 3. Apr 13 20:40:30.156520 systemd[1]: Started sshd@3-10.128.0.46:22-20.229.252.112:44540.service - OpenSSH per-connection server daemon (20.229.252.112:44540). Apr 13 20:40:30.710947 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 13 20:40:30.718355 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:30.843865 sshd[1806]: Accepted publickey for core from 20.229.252.112 port 44540 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:30.845739 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:30.852135 systemd-logind[1582]: New session 4 of user core. Apr 13 20:40:30.859486 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 20:40:31.080307 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:40:31.084700 (kubelet)[1822]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:40:31.138296 kubelet[1822]: E0413 20:40:31.138202 1822 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:40:31.142801 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:40:31.143256 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:40:31.325676 sshd[1806]: pam_unix(sshd:session): session closed for user core Apr 13 20:40:31.330428 systemd[1]: sshd@3-10.128.0.46:22-20.229.252.112:44540.service: Deactivated successfully. Apr 13 20:40:31.335574 systemd-logind[1582]: Session 4 logged out. Waiting for processes to exit. Apr 13 20:40:31.336780 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 20:40:31.339949 systemd-logind[1582]: Removed session 4. Apr 13 20:40:31.452473 systemd[1]: Started sshd@4-10.128.0.46:22-20.229.252.112:44550.service - OpenSSH per-connection server daemon (20.229.252.112:44550). Apr 13 20:40:32.166111 sshd[1834]: Accepted publickey for core from 20.229.252.112 port 44550 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:32.167358 sshd[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:32.172962 systemd-logind[1582]: New session 5 of user core. Apr 13 20:40:32.179521 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 13 20:40:32.573602 sudo[1838]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:40:32.574146 sudo[1838]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:40:32.587859 sudo[1838]: pam_unix(sudo:session): session closed for user root Apr 13 20:40:32.702928 sshd[1834]: pam_unix(sshd:session): session closed for user core Apr 13 20:40:32.708775 systemd[1]: sshd@4-10.128.0.46:22-20.229.252.112:44550.service: Deactivated successfully. Apr 13 20:40:32.714174 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:40:32.715146 systemd-logind[1582]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:40:32.716579 systemd-logind[1582]: Removed session 5. Apr 13 20:40:32.822476 systemd[1]: Started sshd@5-10.128.0.46:22-20.229.252.112:44556.service - OpenSSH per-connection server daemon (20.229.252.112:44556). Apr 13 20:40:33.539342 sshd[1843]: Accepted publickey for core from 20.229.252.112 port 44556 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:33.541241 sshd[1843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:33.547854 systemd-logind[1582]: New session 6 of user core. Apr 13 20:40:33.553491 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 20:40:33.931910 sudo[1848]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 13 20:40:33.932424 sudo[1848]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:40:33.937701 sudo[1848]: pam_unix(sudo:session): session closed for user root Apr 13 20:40:33.951096 sudo[1847]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Apr 13 20:40:33.951584 sudo[1847]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:40:33.967434 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Apr 13 20:40:33.978504 auditctl[1851]: No rules Apr 13 20:40:33.979191 systemd[1]: audit-rules.service: Deactivated successfully. Apr 13 20:40:33.979604 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Apr 13 20:40:33.990607 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Apr 13 20:40:34.022376 augenrules[1870]: No rules Apr 13 20:40:34.024312 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Apr 13 20:40:34.027887 sudo[1847]: pam_unix(sudo:session): session closed for user root Apr 13 20:40:34.143513 sshd[1843]: pam_unix(sshd:session): session closed for user core Apr 13 20:40:34.148112 systemd[1]: sshd@5-10.128.0.46:22-20.229.252.112:44556.service: Deactivated successfully. Apr 13 20:40:34.153729 systemd-logind[1582]: Session 6 logged out. Waiting for processes to exit. Apr 13 20:40:34.154879 systemd[1]: session-6.scope: Deactivated successfully. Apr 13 20:40:34.156533 systemd-logind[1582]: Removed session 6. Apr 13 20:40:34.269772 systemd[1]: Started sshd@6-10.128.0.46:22-20.229.252.112:44558.service - OpenSSH per-connection server daemon (20.229.252.112:44558). 
Apr 13 20:40:34.981052 sshd[1879]: Accepted publickey for core from 20.229.252.112 port 44558 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:40:34.981868 sshd[1879]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:40:34.988155 systemd-logind[1582]: New session 7 of user core. Apr 13 20:40:34.998424 systemd[1]: Started session-7.scope - Session 7 of User core. Apr 13 20:40:35.373231 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 13 20:40:35.373749 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:40:35.819535 systemd[1]: Starting docker.service - Docker Application Container Engine... Apr 13 20:40:35.819901 (dockerd)[1898]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Apr 13 20:40:36.249977 dockerd[1898]: time="2026-04-13T20:40:36.249813602Z" level=info msg="Starting up" Apr 13 20:40:36.364444 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1564589720-merged.mount: Deactivated successfully. Apr 13 20:40:36.582574 dockerd[1898]: time="2026-04-13T20:40:36.582173316Z" level=info msg="Loading containers: start." Apr 13 20:40:36.738087 kernel: Initializing XFRM netlink socket Apr 13 20:40:36.838786 systemd-networkd[1218]: docker0: Link UP Apr 13 20:40:36.871462 dockerd[1898]: time="2026-04-13T20:40:36.871401222Z" level=info msg="Loading containers: done." 
Apr 13 20:40:36.889853 dockerd[1898]: time="2026-04-13T20:40:36.889788261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Apr 13 20:40:36.890105 dockerd[1898]: time="2026-04-13T20:40:36.889909793Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Apr 13 20:40:36.890105 dockerd[1898]: time="2026-04-13T20:40:36.890055183Z" level=info msg="Daemon has completed initialization" Apr 13 20:40:36.927691 dockerd[1898]: time="2026-04-13T20:40:36.927603337Z" level=info msg="API listen on /run/docker.sock" Apr 13 20:40:36.928367 systemd[1]: Started docker.service - Docker Application Container Engine. Apr 13 20:40:37.670433 containerd[1598]: time="2026-04-13T20:40:37.670363501Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\"" Apr 13 20:40:38.254423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1133913822.mount: Deactivated successfully. 
Apr 13 20:40:39.949562 containerd[1598]: time="2026-04-13T20:40:39.949495821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:39.951247 containerd[1598]: time="2026-04-13T20:40:39.951175729Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.10: active requests=0, bytes read=29990250" Apr 13 20:40:39.952265 containerd[1598]: time="2026-04-13T20:40:39.952188038Z" level=info msg="ImageCreate event name:\"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:39.955872 containerd[1598]: time="2026-04-13T20:40:39.955810757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:39.957749 containerd[1598]: time="2026-04-13T20:40:39.957482916Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.10\" with image id \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:bbff81e41af4bfca88a1d05a066a48e12e2689c534d073a8c688e3ad6c8701e3\", size \"29986018\" in 2.287063305s" Apr 13 20:40:39.957749 containerd[1598]: time="2026-04-13T20:40:39.957534008Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.10\" returns image reference \"sha256:e1586f2f8635ddb8eb665e8155e4aadb66d9ca499906c11db63a79ae66456b74\"" Apr 13 20:40:39.958501 containerd[1598]: time="2026-04-13T20:40:39.958443069Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\"" Apr 13 20:40:41.248811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Apr 13 20:40:41.256360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:41.551320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:40:41.563671 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:40:41.617305 containerd[1598]: time="2026-04-13T20:40:41.617245224Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:41.620396 containerd[1598]: time="2026-04-13T20:40:41.618172645Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.10: active requests=0, bytes read=26022155" Apr 13 20:40:41.620965 kubelet[2111]: E0413 20:40:41.620888 2111 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:40:41.621327 containerd[1598]: time="2026-04-13T20:40:41.621278399Z" level=info msg="ImageCreate event name:\"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:41.624508 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:40:41.624815 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Apr 13 20:40:41.631094 containerd[1598]: time="2026-04-13T20:40:41.630555767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:41.633595 containerd[1598]: time="2026-04-13T20:40:41.633554004Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.10\" with image id \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:b0880d6ee19f2b9148d3d37008c5ee9fc73976e8edad4d0709f11d32ab3ee709\", size \"27552094\" in 1.674808s" Apr 13 20:40:41.633746 containerd[1598]: time="2026-04-13T20:40:41.633721521Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.10\" returns image reference \"sha256:26db35ccbf4330e5ada4a2786276aac158e92aced08cecce6cb614146e224230\"" Apr 13 20:40:41.634653 containerd[1598]: time="2026-04-13T20:40:41.634619851Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\"" Apr 13 20:40:42.864711 containerd[1598]: time="2026-04-13T20:40:42.864646362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:42.866388 containerd[1598]: time="2026-04-13T20:40:42.866320997Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.10: active requests=0, bytes read=20162981" Apr 13 20:40:42.867381 containerd[1598]: time="2026-04-13T20:40:42.867309151Z" level=info msg="ImageCreate event name:\"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:42.870841 containerd[1598]: time="2026-04-13T20:40:42.870782998Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:42.872585 containerd[1598]: time="2026-04-13T20:40:42.872369287Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.10\" with image id \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:dc1a1aec3bb0ed126b1adff795935124f719969356b24a159fc1a2a0883b89bc\", size \"21692956\" in 1.237704379s" Apr 13 20:40:42.872585 containerd[1598]: time="2026-04-13T20:40:42.872414231Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.10\" returns image reference \"sha256:7f5d3f3b598c23877c138d7739627d8f0160b0a91d321108e9b5affad54f85f7\"" Apr 13 20:40:42.873496 containerd[1598]: time="2026-04-13T20:40:42.873197755Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\"" Apr 13 20:40:44.000222 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1943476751.mount: Deactivated successfully. 
Apr 13 20:40:44.675544 containerd[1598]: time="2026-04-13T20:40:44.675476148Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:44.676832 containerd[1598]: time="2026-04-13T20:40:44.676765620Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.10: active requests=0, bytes read=31828970" Apr 13 20:40:44.678120 containerd[1598]: time="2026-04-13T20:40:44.678029235Z" level=info msg="ImageCreate event name:\"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:44.680434 containerd[1598]: time="2026-04-13T20:40:44.680373086Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:44.681595 containerd[1598]: time="2026-04-13T20:40:44.681383010Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.10\" with image id \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\", repo tag \"registry.k8s.io/kube-proxy:v1.33.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:e8151e38ef22f032dba686cc1bba5a3e525dedbe2d549fa44e653fe79426e261\", size \"31827782\" in 1.80814161s" Apr 13 20:40:44.681595 containerd[1598]: time="2026-04-13T20:40:44.681432237Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.10\" returns image reference \"sha256:bed75257625288e2a7e106a7fe6bf8373eaa2bc2b14805d32033c7655e882f76\"" Apr 13 20:40:44.682014 containerd[1598]: time="2026-04-13T20:40:44.681970582Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Apr 13 20:40:45.173212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1907491317.mount: Deactivated successfully. 
Apr 13 20:40:46.426873 containerd[1598]: time="2026-04-13T20:40:46.426801579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.428623 containerd[1598]: time="2026-04-13T20:40:46.428558146Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942972" Apr 13 20:40:46.429769 containerd[1598]: time="2026-04-13T20:40:46.429480845Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.433117 containerd[1598]: time="2026-04-13T20:40:46.433076455Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.434870 containerd[1598]: time="2026-04-13T20:40:46.434666211Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.752639261s" Apr 13 20:40:46.434870 containerd[1598]: time="2026-04-13T20:40:46.434714664Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Apr 13 20:40:46.435733 containerd[1598]: time="2026-04-13T20:40:46.435415211Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Apr 13 20:40:46.876359 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2406163379.mount: Deactivated successfully. 
Apr 13 20:40:46.881759 containerd[1598]: time="2026-04-13T20:40:46.881698454Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.882947 containerd[1598]: time="2026-04-13T20:40:46.882885280Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321228" Apr 13 20:40:46.884230 containerd[1598]: time="2026-04-13T20:40:46.883841397Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.887045 containerd[1598]: time="2026-04-13T20:40:46.886966530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:46.888568 containerd[1598]: time="2026-04-13T20:40:46.887967378Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 452.514627ms" Apr 13 20:40:46.888568 containerd[1598]: time="2026-04-13T20:40:46.888013312Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Apr 13 20:40:46.889076 containerd[1598]: time="2026-04-13T20:40:46.889020258Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\"" Apr 13 20:40:47.347525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1368199222.mount: Deactivated successfully. Apr 13 20:40:48.164230 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 13 20:40:48.608369 containerd[1598]: time="2026-04-13T20:40:48.608298302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:48.610053 containerd[1598]: time="2026-04-13T20:40:48.609988865Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23719521" Apr 13 20:40:48.610857 containerd[1598]: time="2026-04-13T20:40:48.610787728Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:48.614411 containerd[1598]: time="2026-04-13T20:40:48.614354359Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:40:48.616112 containerd[1598]: time="2026-04-13T20:40:48.615924145Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.726865715s" Apr 13 20:40:48.616112 containerd[1598]: time="2026-04-13T20:40:48.615972946Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\"" Apr 13 20:40:51.644571 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Apr 13 20:40:51.650845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:51.675775 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:40:51.675948 systemd[1]: kubelet.service: Failed with result 'signal'. 
Apr 13 20:40:51.676441 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:40:51.693410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:51.737674 systemd[1]: Reloading requested from client PID 2289 ('systemctl') (unit session-7.scope)... Apr 13 20:40:51.737692 systemd[1]: Reloading... Apr 13 20:40:51.879106 zram_generator::config[2326]: No configuration found. Apr 13 20:40:52.072483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:40:52.174920 systemd[1]: Reloading finished in 436 ms. Apr 13 20:40:52.222683 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Apr 13 20:40:52.222848 systemd[1]: kubelet.service: Failed with result 'signal'. Apr 13 20:40:52.223326 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:40:52.229460 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:40:52.551303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:40:52.566728 (kubelet)[2389]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:40:52.623481 kubelet[2389]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:40:52.623481 kubelet[2389]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Apr 13 20:40:52.623481 kubelet[2389]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:40:52.624422 kubelet[2389]: I0413 20:40:52.624361 2389 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:40:53.324865 kubelet[2389]: I0413 20:40:53.324804 2389 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:40:53.324865 kubelet[2389]: I0413 20:40:53.324840 2389 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:40:53.325275 kubelet[2389]: I0413 20:40:53.325234 2389 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:40:53.374041 kubelet[2389]: E0413 20:40:53.373967 2389 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Apr 13 20:40:53.376121 kubelet[2389]: I0413 20:40:53.376085 2389 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:40:53.384758 kubelet[2389]: E0413 20:40:53.384692 2389 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:40:53.384758 kubelet[2389]: I0413 20:40:53.384739 2389 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 13 20:40:53.389099 kubelet[2389]: I0413 20:40:53.388880 2389 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 20:40:53.390554 kubelet[2389]: I0413 20:40:53.390485 2389 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:40:53.390793 kubelet[2389]: I0413 20:40:53.390545 2389 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":1} Apr 13 20:40:53.390994 kubelet[2389]: I0413 20:40:53.390793 2389 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:40:53.390994 kubelet[2389]: I0413 20:40:53.390814 2389 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:40:53.391132 kubelet[2389]: I0413 20:40:53.391008 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:40:53.398509 kubelet[2389]: I0413 20:40:53.398466 2389 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:40:53.398509 kubelet[2389]: I0413 20:40:53.398513 2389 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:40:53.399035 kubelet[2389]: I0413 20:40:53.398555 2389 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:40:53.399035 kubelet[2389]: I0413 20:40:53.398589 2389 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:40:53.406819 kubelet[2389]: I0413 20:40:53.405941 2389 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:40:53.406819 kubelet[2389]: I0413 20:40:53.406730 2389 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:40:53.408680 kubelet[2389]: W0413 20:40:53.408026 2389 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Apr 13 20:40:53.413084 kubelet[2389]: E0413 20:40:53.412954 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:40:53.414093 kubelet[2389]: E0413 20:40:53.413802 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:40:53.429423 kubelet[2389]: I0413 20:40:53.429385 2389 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:40:53.429423 kubelet[2389]: I0413 20:40:53.429462 2389 server.go:1289] "Started kubelet" Apr 13 20:40:53.431982 kubelet[2389]: I0413 20:40:53.431926 2389 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:40:53.432642 kubelet[2389]: I0413 20:40:53.432579 2389 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:40:53.434207 kubelet[2389]: I0413 20:40:53.434008 2389 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:40:53.436414 kubelet[2389]: E0413 20:40:53.434541 2389 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.46:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal.18a6053addaa7171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,UID:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:40:53.429416305 +0000 UTC m=+0.855866523,LastTimestamp:2026-04-13 20:40:53.429416305 +0000 UTC m=+0.855866523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,}" Apr 13 20:40:53.438518 kubelet[2389]: I0413 20:40:53.438180 2389 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:40:53.439550 kubelet[2389]: I0413 20:40:53.438923 2389 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:40:53.443088 kubelet[2389]: I0413 20:40:53.443032 2389 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:40:53.444148 kubelet[2389]: E0413 20:40:53.444114 2389 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" Apr 13 20:40:53.444938 kubelet[2389]: I0413 20:40:53.444481 2389 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:40:53.444938 kubelet[2389]: I0413 20:40:53.444549 2389 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:40:53.445100 kubelet[2389]: E0413 20:40:53.445048 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:40:53.445519 kubelet[2389]: E0413 20:40:53.445184 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="200ms" Apr 13 20:40:53.446165 kubelet[2389]: I0413 20:40:53.446130 2389 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:40:53.446930 kubelet[2389]: I0413 20:40:53.446897 2389 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:40:53.447084 kubelet[2389]: I0413 20:40:53.447041 2389 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:40:53.451300 kubelet[2389]: E0413 20:40:53.451126 2389 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:40:53.451802 kubelet[2389]: I0413 20:40:53.451688 2389 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:40:53.492973 kubelet[2389]: I0413 20:40:53.491970 2389 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:40:53.494141 kubelet[2389]: I0413 20:40:53.494077 2389 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 13 20:40:53.494141 kubelet[2389]: I0413 20:40:53.494113 2389 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:40:53.494323 kubelet[2389]: I0413 20:40:53.494157 2389 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:40:53.494323 kubelet[2389]: I0413 20:40:53.494170 2389 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:40:53.494323 kubelet[2389]: E0413 20:40:53.494235 2389 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:40:53.503888 kubelet[2389]: E0413 20:40:53.503833 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:40:53.504705 kubelet[2389]: I0413 20:40:53.504381 2389 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:40:53.504705 kubelet[2389]: I0413 20:40:53.504403 2389 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:40:53.504705 kubelet[2389]: I0413 20:40:53.504426 2389 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:40:53.507472 kubelet[2389]: I0413 20:40:53.507242 2389 policy_none.go:49] "None policy: Start" Apr 13 20:40:53.507472 kubelet[2389]: I0413 20:40:53.507264 2389 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:40:53.507472 kubelet[2389]: I0413 20:40:53.507276 2389 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:40:53.512424 kubelet[2389]: E0413 20:40:53.512381 2389 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 
20:40:53.512632 kubelet[2389]: I0413 20:40:53.512600 2389 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:40:53.512720 kubelet[2389]: I0413 20:40:53.512621 2389 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:40:53.514350 kubelet[2389]: I0413 20:40:53.514310 2389 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:40:53.521813 kubelet[2389]: E0413 20:40:53.521652 2389 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:40:53.521813 kubelet[2389]: E0413 20:40:53.521714 2389 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" Apr 13 20:40:53.621225 kubelet[2389]: E0413 20:40:53.620795 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.625302 kubelet[2389]: I0413 20:40:53.625233 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.627604 kubelet[2389]: E0413 20:40:53.627116 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.628653 kubelet[2389]: E0413 20:40:53.628622 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" 
node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.635123 kubelet[2389]: E0413 20:40:53.634709 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.645952 kubelet[2389]: E0413 20:40:53.645877 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="400ms" Apr 13 20:40:53.746330 kubelet[2389]: I0413 20:40:53.746217 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746330 kubelet[2389]: I0413 20:40:53.746279 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746587 kubelet[2389]: I0413 20:40:53.746335 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746587 kubelet[2389]: I0413 20:40:53.746418 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746587 kubelet[2389]: I0413 20:40:53.746449 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746587 kubelet[2389]: I0413 20:40:53.746485 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746816 kubelet[2389]: I0413 20:40:53.746522 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67303d805069f456587490a3f6955eba-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"67303d805069f456587490a3f6955eba\") " pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746816 kubelet[2389]: I0413 20:40:53.746549 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.746816 kubelet[2389]: I0413 20:40:53.746589 2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.833710 kubelet[2389]: I0413 20:40:53.833669 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.834178 kubelet[2389]: E0413 20:40:53.834113 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:53.922531 containerd[1598]: time="2026-04-13T20:40:53.922347404Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:5107666967720f3ed450c32a34e3cb86,Namespace:kube-system,Attempt:0,}" Apr 13 20:40:53.930337 containerd[1598]: time="2026-04-13T20:40:53.930263866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:b65bba48028861a3261a0afc5541a282,Namespace:kube-system,Attempt:0,}" Apr 13 20:40:53.936604 containerd[1598]: time="2026-04-13T20:40:53.936026086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:67303d805069f456587490a3f6955eba,Namespace:kube-system,Attempt:0,}" Apr 13 20:40:54.046860 kubelet[2389]: E0413 20:40:54.046778 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="800ms" Apr 13 20:40:54.241425 kubelet[2389]: I0413 20:40:54.241235 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:54.242441 kubelet[2389]: E0413 20:40:54.242227 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:54.380633 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883332933.mount: Deactivated successfully. 
Apr 13 20:40:54.388144 containerd[1598]: time="2026-04-13T20:40:54.388090630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:40:54.389675 containerd[1598]: time="2026-04-13T20:40:54.389607450Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:40:54.391107 containerd[1598]: time="2026-04-13T20:40:54.391018038Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312146" Apr 13 20:40:54.391437 containerd[1598]: time="2026-04-13T20:40:54.391358748Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:40:54.392105 containerd[1598]: time="2026-04-13T20:40:54.391817840Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:40:54.393106 containerd[1598]: time="2026-04-13T20:40:54.393043332Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:40:54.395367 containerd[1598]: time="2026-04-13T20:40:54.394838091Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:40:54.397457 containerd[1598]: time="2026-04-13T20:40:54.397113570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:40:54.400896 
containerd[1598]: time="2026-04-13T20:40:54.400047716Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 469.687079ms" Apr 13 20:40:54.403169 containerd[1598]: time="2026-04-13T20:40:54.403001259Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 480.543707ms" Apr 13 20:40:54.410152 containerd[1598]: time="2026-04-13T20:40:54.410106478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 473.981295ms" Apr 13 20:40:54.525087 kubelet[2389]: E0413 20:40:54.524463 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.46:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Apr 13 20:40:54.601272 containerd[1598]: time="2026-04-13T20:40:54.601156210Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:40:54.601696 containerd[1598]: time="2026-04-13T20:40:54.601426024Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:40:54.601696 containerd[1598]: time="2026-04-13T20:40:54.601535480Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.603424 containerd[1598]: time="2026-04-13T20:40:54.602823975Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:40:54.603424 containerd[1598]: time="2026-04-13T20:40:54.602888432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:40:54.603424 containerd[1598]: time="2026-04-13T20:40:54.602926288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.603424 containerd[1598]: time="2026-04-13T20:40:54.603047792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.603424 containerd[1598]: time="2026-04-13T20:40:54.602566445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.606822 containerd[1598]: time="2026-04-13T20:40:54.604588835Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:40:54.606822 containerd[1598]: time="2026-04-13T20:40:54.604647593Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:40:54.606822 containerd[1598]: time="2026-04-13T20:40:54.604692025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.606822 containerd[1598]: time="2026-04-13T20:40:54.604864516Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:40:54.750892 containerd[1598]: time="2026-04-13T20:40:54.750837480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:67303d805069f456587490a3f6955eba,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdc9ad6f1db850b6b84e20c8a2a87e8da133a5565f73eb8577a6b97e0852714f\"" Apr 13 20:40:54.751877 kubelet[2389]: E0413 20:40:54.751830 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Apr 13 20:40:54.755211 kubelet[2389]: E0413 20:40:54.755163 2389 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-21291" Apr 13 20:40:54.756739 containerd[1598]: time="2026-04-13T20:40:54.756605903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:5107666967720f3ed450c32a34e3cb86,Namespace:kube-system,Attempt:0,} returns sandbox id \"61f7a70c5a5be410d59b6924ae8cea36c0e53b76d27c78a1bacd048fe6cfe03f\"" Apr 13 20:40:54.760215 kubelet[2389]: E0413 20:40:54.760007 2389 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" hostnameMaxLen=63 
truncatedHostname="kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-21291" Apr 13 20:40:54.762089 containerd[1598]: time="2026-04-13T20:40:54.761888301Z" level=info msg="CreateContainer within sandbox \"bdc9ad6f1db850b6b84e20c8a2a87e8da133a5565f73eb8577a6b97e0852714f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:40:54.762963 containerd[1598]: time="2026-04-13T20:40:54.762916557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,Uid:b65bba48028861a3261a0afc5541a282,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a5ce848f0a8a2190c54300eb238e26e1e7d6c36ff9a8646e8daca811fccdab6\"" Apr 13 20:40:54.766326 containerd[1598]: time="2026-04-13T20:40:54.763883562Z" level=info msg="CreateContainer within sandbox \"61f7a70c5a5be410d59b6924ae8cea36c0e53b76d27c78a1bacd048fe6cfe03f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:40:54.768992 kubelet[2389]: E0413 20:40:54.768616 2389 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flat" Apr 13 20:40:54.773678 containerd[1598]: time="2026-04-13T20:40:54.773637839Z" level=info msg="CreateContainer within sandbox \"2a5ce848f0a8a2190c54300eb238e26e1e7d6c36ff9a8646e8daca811fccdab6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:40:54.787152 containerd[1598]: time="2026-04-13T20:40:54.786944857Z" level=info msg="CreateContainer within sandbox \"bdc9ad6f1db850b6b84e20c8a2a87e8da133a5565f73eb8577a6b97e0852714f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5c6eeee59002698c40c6f499076e5ccda491e6d9e9c05c8d1dceee7620917538\"" Apr 13 20:40:54.789544 containerd[1598]: 
time="2026-04-13T20:40:54.789507718Z" level=info msg="StartContainer for \"5c6eeee59002698c40c6f499076e5ccda491e6d9e9c05c8d1dceee7620917538\"" Apr 13 20:40:54.799090 containerd[1598]: time="2026-04-13T20:40:54.798217890Z" level=info msg="CreateContainer within sandbox \"61f7a70c5a5be410d59b6924ae8cea36c0e53b76d27c78a1bacd048fe6cfe03f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1a583dd98faf0f74318a78f946f26348deaf3e5f9c384b45cd4e01c96050e386\"" Apr 13 20:40:54.799090 containerd[1598]: time="2026-04-13T20:40:54.798916670Z" level=info msg="StartContainer for \"1a583dd98faf0f74318a78f946f26348deaf3e5f9c384b45cd4e01c96050e386\"" Apr 13 20:40:54.804972 containerd[1598]: time="2026-04-13T20:40:54.804925146Z" level=info msg="CreateContainer within sandbox \"2a5ce848f0a8a2190c54300eb238e26e1e7d6c36ff9a8646e8daca811fccdab6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b861d6b7ed949b5f023bd027f60bad933039da27f6b11a360355bb82e35d8ca5\"" Apr 13 20:40:54.806719 containerd[1598]: time="2026-04-13T20:40:54.806687238Z" level=info msg="StartContainer for \"b861d6b7ed949b5f023bd027f60bad933039da27f6b11a360355bb82e35d8ca5\"" Apr 13 20:40:54.843530 kubelet[2389]: E0413 20:40:54.843481 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Apr 13 20:40:54.847349 kubelet[2389]: E0413 20:40:54.847299 2389 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.46:6443: connect: connection refused" interval="1.6s" Apr 13 20:40:54.854748 
kubelet[2389]: E0413 20:40:54.854704 2389 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal&limit=500&resourceVersion=0\": dial tcp 10.128.0.46:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Apr 13 20:40:54.957419 containerd[1598]: time="2026-04-13T20:40:54.957367547Z" level=info msg="StartContainer for \"1a583dd98faf0f74318a78f946f26348deaf3e5f9c384b45cd4e01c96050e386\" returns successfully" Apr 13 20:40:54.965202 containerd[1598]: time="2026-04-13T20:40:54.964985207Z" level=info msg="StartContainer for \"b861d6b7ed949b5f023bd027f60bad933039da27f6b11a360355bb82e35d8ca5\" returns successfully" Apr 13 20:40:55.032330 containerd[1598]: time="2026-04-13T20:40:55.032264355Z" level=info msg="StartContainer for \"5c6eeee59002698c40c6f499076e5ccda491e6d9e9c05c8d1dceee7620917538\" returns successfully" Apr 13 20:40:55.049472 kubelet[2389]: I0413 20:40:55.049340 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:55.049853 kubelet[2389]: E0413 20:40:55.049789 2389 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.46:6443/api/v1/nodes\": dial tcp 10.128.0.46:6443: connect: connection refused" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:55.532943 kubelet[2389]: E0413 20:40:55.532898 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:55.537556 kubelet[2389]: E0413 20:40:55.537519 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:55.538085 kubelet[2389]: E0413 20:40:55.537902 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:56.531588 kubelet[2389]: E0413 20:40:56.531545 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:56.533304 kubelet[2389]: E0413 20:40:56.532055 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:56.655137 kubelet[2389]: I0413 20:40:56.655100 2389 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:57.361675 kubelet[2389]: E0413 20:40:57.361628 2389 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:57.871325 kubelet[2389]: E0413 20:40:57.871272 2389 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:57.909392 kubelet[2389]: E0413 20:40:57.909221 2389 event.go:359] "Server rejected 
event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal.18a6053addaa7171 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,UID:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:40:53.429416305 +0000 UTC m=+0.855866523,LastTimestamp:2026-04-13 20:40:53.429416305 +0000 UTC m=+0.855866523,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,}" Apr 13 20:40:57.974492 kubelet[2389]: I0413 20:40:57.974436 2389 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:57.974697 kubelet[2389]: E0413 20:40:57.974515 2389 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\": node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" not found" Apr 13 20:40:58.028705 kubelet[2389]: E0413 20:40:58.028558 2389 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal.18a6053addd14c42 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,UID:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:cgroup 
v1 support is in maintenance mode, please migrate to cgroup v2,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:40:53.43196269 +0000 UTC m=+0.858412921,LastTimestamp:2026-04-13 20:40:53.43196269 +0000 UTC m=+0.858412921,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal,}" Apr 13 20:40:58.045111 kubelet[2389]: I0413 20:40:58.045036 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.092143 kubelet[2389]: E0413 20:40:58.091714 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.092143 kubelet[2389]: I0413 20:40:58.092025 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.100170 kubelet[2389]: E0413 20:40:58.099495 2389 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.100170 kubelet[2389]: I0413 20:40:58.099554 2389 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.104995 kubelet[2389]: E0413 20:40:58.104081 2389 kubelet.go:3311] "Failed creating a mirror pod" 
err="pods \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:40:58.405432 kubelet[2389]: I0413 20:40:58.405382 2389 apiserver.go:52] "Watching apiserver" Apr 13 20:40:58.444680 kubelet[2389]: I0413 20:40:58.444630 2389 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:40:59.545126 systemd[1]: Reloading requested from client PID 2670 ('systemctl') (unit session-7.scope)... Apr 13 20:40:59.545149 systemd[1]: Reloading... Apr 13 20:40:59.685701 zram_generator::config[2710]: No configuration found. Apr 13 20:40:59.841836 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:40:59.962403 systemd[1]: Reloading finished in 416 ms. Apr 13 20:41:00.012490 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:41:00.025611 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:41:00.025927 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:41:00.038085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:41:00.372311 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:41:00.380092 (kubelet)[2768]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:41:00.447241 kubelet[2768]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 13 20:41:00.447241 kubelet[2768]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Apr 13 20:41:00.447241 kubelet[2768]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:41:00.447241 kubelet[2768]: I0413 20:41:00.446123 2768 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 13 20:41:00.453467 kubelet[2768]: I0413 20:41:00.453419 2768 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Apr 13 20:41:00.453467 kubelet[2768]: I0413 20:41:00.453453 2768 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:41:00.453796 kubelet[2768]: I0413 20:41:00.453773 2768 server.go:956] "Client rotation is on, will bootstrap in background" Apr 13 20:41:00.455501 kubelet[2768]: I0413 20:41:00.455467 2768 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:41:00.461912 kubelet[2768]: I0413 20:41:00.461716 2768 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:41:00.465952 kubelet[2768]: E0413 20:41:00.465893 2768 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:41:00.465952 kubelet[2768]: I0413 20:41:00.465929 2768 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Apr 13 20:41:00.469282 kubelet[2768]: I0413 20:41:00.469242 2768 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 13 20:41:00.469853 kubelet[2768]: I0413 20:41:00.469806 2768 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:41:00.470082 kubelet[2768]: I0413 20:41:00.469836 2768 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPoli
cyOptions":null,"CgroupVersion":1} Apr 13 20:41:00.470082 kubelet[2768]: I0413 20:41:00.470076 2768 topology_manager.go:138] "Creating topology manager with none policy" Apr 13 20:41:00.470303 kubelet[2768]: I0413 20:41:00.470096 2768 container_manager_linux.go:303] "Creating device plugin manager" Apr 13 20:41:00.470303 kubelet[2768]: I0413 20:41:00.470170 2768 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:41:00.470411 kubelet[2768]: I0413 20:41:00.470395 2768 kubelet.go:480] "Attempting to sync node with API server" Apr 13 20:41:00.470461 kubelet[2768]: I0413 20:41:00.470422 2768 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:41:00.470514 kubelet[2768]: I0413 20:41:00.470469 2768 kubelet.go:386] "Adding apiserver pod source" Apr 13 20:41:00.470514 kubelet[2768]: I0413 20:41:00.470498 2768 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:41:00.478894 kubelet[2768]: I0413 20:41:00.473033 2768 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:41:00.478894 kubelet[2768]: I0413 20:41:00.473850 2768 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:41:00.502978 kubelet[2768]: I0413 20:41:00.502955 2768 watchdog_linux.go:99] "Systemd watchdog is not enabled" Apr 13 20:41:00.503207 kubelet[2768]: I0413 20:41:00.503190 2768 server.go:1289] "Started kubelet" Apr 13 20:41:00.505259 kubelet[2768]: I0413 20:41:00.505214 2768 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 13 20:41:00.506693 kubelet[2768]: I0413 20:41:00.506651 2768 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:41:00.513181 kubelet[2768]: I0413 20:41:00.512753 2768 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:41:00.516699 kubelet[2768]: I0413 20:41:00.507136 2768 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 20:41:00.516699 kubelet[2768]: I0413 20:41:00.516697 2768 volume_manager.go:297] "Starting Kubelet Volume Manager" Apr 13 20:41:00.521469 kubelet[2768]: I0413 20:41:00.521417 2768 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Apr 13 20:41:00.521701 kubelet[2768]: I0413 20:41:00.521652 2768 reconciler.go:26] "Reconciler: start to sync state" Apr 13 20:41:00.524091 kubelet[2768]: I0413 20:41:00.522876 2768 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:41:00.531696 kubelet[2768]: I0413 20:41:00.523503 2768 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:41:00.543967 kubelet[2768]: I0413 20:41:00.543937 2768 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:41:00.544198 kubelet[2768]: I0413 20:41:00.544153 2768 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:41:00.548492 kubelet[2768]: I0413 20:41:00.548447 2768 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:41:00.548719 kubelet[2768]: E0413 20:41:00.548692 2768 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 13 20:41:00.570134 kubelet[2768]: I0413 20:41:00.570075 2768 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Apr 13 20:41:00.571865 kubelet[2768]: I0413 20:41:00.571833 2768 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Apr 13 20:41:00.571865 kubelet[2768]: I0413 20:41:00.571860 2768 status_manager.go:230] "Starting to sync pod status with apiserver" Apr 13 20:41:00.573167 kubelet[2768]: I0413 20:41:00.571885 2768 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:41:00.573167 kubelet[2768]: I0413 20:41:00.571897 2768 kubelet.go:2436] "Starting kubelet main sync loop" Apr 13 20:41:00.573167 kubelet[2768]: E0413 20:41:00.571952 2768 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:41:00.658737 kubelet[2768]: I0413 20:41:00.658609 2768 cpu_manager.go:221] "Starting CPU manager" policy="none" Apr 13 20:41:00.658737 kubelet[2768]: I0413 20:41:00.658637 2768 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Apr 13 20:41:00.658737 kubelet[2768]: I0413 20:41:00.658662 2768 state_mem.go:36] "Initialized new in-memory state store" Apr 13 20:41:00.659008 kubelet[2768]: I0413 20:41:00.658842 2768 state_mem.go:88] "Updated default CPUSet" cpuSet="" Apr 13 20:41:00.659008 kubelet[2768]: I0413 20:41:00.658856 2768 state_mem.go:96] "Updated CPUSet assignments" assignments={} Apr 13 20:41:00.659008 kubelet[2768]: I0413 20:41:00.658883 2768 policy_none.go:49] "None policy: Start" Apr 13 20:41:00.659008 kubelet[2768]: I0413 20:41:00.658898 2768 memory_manager.go:186] "Starting memorymanager" policy="None" Apr 13 20:41:00.659008 kubelet[2768]: I0413 20:41:00.658914 2768 state_mem.go:35] "Initializing new in-memory state store" Apr 13 20:41:00.659282 kubelet[2768]: I0413 20:41:00.659050 2768 state_mem.go:75] "Updated machine memory state" Apr 13 20:41:00.661173 kubelet[2768]: E0413 20:41:00.661144 2768 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:41:00.661410 kubelet[2768]: I0413 
20:41:00.661366 2768 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 13 20:41:00.661410 kubelet[2768]: I0413 20:41:00.661391 2768 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:41:00.667549 kubelet[2768]: I0413 20:41:00.667407 2768 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 13 20:41:00.669149 kubelet[2768]: E0413 20:41:00.668116 2768 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:41:00.673702 kubelet[2768]: I0413 20:41:00.672738 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.674773 kubelet[2768]: I0413 20:41:00.674751 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.678375 kubelet[2768]: I0413 20:41:00.678332 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.687663 kubelet[2768]: I0413 20:41:00.686345 2768 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:41:00.687943 kubelet[2768]: I0413 20:41:00.687923 2768 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:41:00.688568 kubelet[2768]: I0413 20:41:00.688408 2768 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is 
recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:41:00.722911 kubelet[2768]: I0413 20:41:00.722787 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.722911 kubelet[2768]: I0413 20:41:00.722841 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.722911 kubelet[2768]: I0413 20:41:00.722900 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.723249 kubelet[2768]: I0413 20:41:00.722929 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " 
pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.723249 kubelet[2768]: I0413 20:41:00.722978 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.723249 kubelet[2768]: I0413 20:41:00.723004 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.723249 kubelet[2768]: I0413 20:41:00.723048 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b65bba48028861a3261a0afc5541a282-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"b65bba48028861a3261a0afc5541a282\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.723541 kubelet[2768]: I0413 20:41:00.723104 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5107666967720f3ed450c32a34e3cb86-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: 
\"5107666967720f3ed450c32a34e3cb86\") " pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.783296 kubelet[2768]: I0413 20:41:00.783122 2768 kubelet_node_status.go:75] "Attempting to register node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.795231 kubelet[2768]: I0413 20:41:00.794859 2768 kubelet_node_status.go:124] "Node was previously registered" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.795231 kubelet[2768]: I0413 20:41:00.794957 2768 kubelet_node_status.go:78] "Successfully registered node" node="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:00.824781 kubelet[2768]: I0413 20:41:00.824431 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67303d805069f456587490a3f6955eba-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" (UID: \"67303d805069f456587490a3f6955eba\") " pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:01.478430 kubelet[2768]: I0413 20:41:01.477637 2768 apiserver.go:52] "Watching apiserver" Apr 13 20:41:01.522102 kubelet[2768]: I0413 20:41:01.522043 2768 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Apr 13 20:41:01.607616 kubelet[2768]: I0413 20:41:01.606617 2768 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:01.616354 kubelet[2768]: I0413 20:41:01.616244 2768 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:41:01.616819 kubelet[2768]: E0413 20:41:01.616462 2768 
kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:01.687001 kubelet[2768]: I0413 20:41:01.686921 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" podStartSLOduration=1.68689716 podStartE2EDuration="1.68689716s" podCreationTimestamp="2026-04-13 20:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:01.661344897 +0000 UTC m=+1.273670401" watchObservedRunningTime="2026-04-13 20:41:01.68689716 +0000 UTC m=+1.299222660" Apr 13 20:41:01.687245 kubelet[2768]: I0413 20:41:01.687051 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" podStartSLOduration=1.687045683 podStartE2EDuration="1.687045683s" podCreationTimestamp="2026-04-13 20:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:01.68376893 +0000 UTC m=+1.296094432" watchObservedRunningTime="2026-04-13 20:41:01.687045683 +0000 UTC m=+1.299371185" Apr 13 20:41:01.716737 kubelet[2768]: I0413 20:41:01.716642 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" podStartSLOduration=1.716618264 podStartE2EDuration="1.716618264s" podCreationTimestamp="2026-04-13 20:41:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:01.716349839 +0000 UTC m=+1.328675344" 
watchObservedRunningTime="2026-04-13 20:41:01.716618264 +0000 UTC m=+1.328943770" Apr 13 20:41:02.446364 update_engine[1587]: I20260413 20:41:02.446277 1587 update_attempter.cc:509] Updating boot flags... Apr 13 20:41:02.516090 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2822) Apr 13 20:41:02.695201 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2826) Apr 13 20:41:02.809141 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2826) Apr 13 20:41:07.375490 kubelet[2768]: I0413 20:41:07.375447 2768 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:41:07.376562 containerd[1598]: time="2026-04-13T20:41:07.376403031Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:41:07.377267 kubelet[2768]: I0413 20:41:07.376672 2768 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:41:07.873973 kubelet[2768]: I0413 20:41:07.873846 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d8655be4-ed32-4fc9-b52c-3e43df213224-lib-modules\") pod \"kube-proxy-vhl4t\" (UID: \"d8655be4-ed32-4fc9-b52c-3e43df213224\") " pod="kube-system/kube-proxy-vhl4t" Apr 13 20:41:07.873973 kubelet[2768]: I0413 20:41:07.873913 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d8655be4-ed32-4fc9-b52c-3e43df213224-kube-proxy\") pod \"kube-proxy-vhl4t\" (UID: \"d8655be4-ed32-4fc9-b52c-3e43df213224\") " pod="kube-system/kube-proxy-vhl4t" Apr 13 20:41:07.873973 kubelet[2768]: I0413 20:41:07.873941 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d8655be4-ed32-4fc9-b52c-3e43df213224-xtables-lock\") pod \"kube-proxy-vhl4t\" (UID: \"d8655be4-ed32-4fc9-b52c-3e43df213224\") " pod="kube-system/kube-proxy-vhl4t" Apr 13 20:41:07.873973 kubelet[2768]: I0413 20:41:07.873971 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5lch\" (UniqueName: \"kubernetes.io/projected/d8655be4-ed32-4fc9-b52c-3e43df213224-kube-api-access-d5lch\") pod \"kube-proxy-vhl4t\" (UID: \"d8655be4-ed32-4fc9-b52c-3e43df213224\") " pod="kube-system/kube-proxy-vhl4t" Apr 13 20:41:07.983170 kubelet[2768]: E0413 20:41:07.983110 2768 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Apr 13 20:41:07.983170 kubelet[2768]: E0413 20:41:07.983156 2768 projected.go:194] Error preparing data for projected volume kube-api-access-d5lch for pod kube-system/kube-proxy-vhl4t: configmap "kube-root-ca.crt" not found Apr 13 20:41:07.983423 kubelet[2768]: E0413 20:41:07.983255 2768 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d8655be4-ed32-4fc9-b52c-3e43df213224-kube-api-access-d5lch podName:d8655be4-ed32-4fc9-b52c-3e43df213224 nodeName:}" failed. No retries permitted until 2026-04-13 20:41:08.483223828 +0000 UTC m=+8.095549324 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-d5lch" (UniqueName: "kubernetes.io/projected/d8655be4-ed32-4fc9-b52c-3e43df213224-kube-api-access-d5lch") pod "kube-proxy-vhl4t" (UID: "d8655be4-ed32-4fc9-b52c-3e43df213224") : configmap "kube-root-ca.crt" not found Apr 13 20:41:08.580091 kubelet[2768]: I0413 20:41:08.580011 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3955d42a-feba-447a-85d5-5fb5ddb93fd0-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-9pskl\" (UID: \"3955d42a-feba-447a-85d5-5fb5ddb93fd0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9pskl" Apr 13 20:41:08.581513 kubelet[2768]: I0413 20:41:08.580150 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84hs\" (UniqueName: \"kubernetes.io/projected/3955d42a-feba-447a-85d5-5fb5ddb93fd0-kube-api-access-d84hs\") pod \"tigera-operator-6bf85f8dd-9pskl\" (UID: \"3955d42a-feba-447a-85d5-5fb5ddb93fd0\") " pod="tigera-operator/tigera-operator-6bf85f8dd-9pskl" Apr 13 20:41:08.721022 containerd[1598]: time="2026-04-13T20:41:08.720915090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhl4t,Uid:d8655be4-ed32-4fc9-b52c-3e43df213224,Namespace:kube-system,Attempt:0,}" Apr 13 20:41:08.757823 containerd[1598]: time="2026-04-13T20:41:08.757699506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:08.758402 containerd[1598]: time="2026-04-13T20:41:08.758121709Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:08.758402 containerd[1598]: time="2026-04-13T20:41:08.758196563Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:08.758613 containerd[1598]: time="2026-04-13T20:41:08.758376325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:08.831354 containerd[1598]: time="2026-04-13T20:41:08.831195292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vhl4t,Uid:d8655be4-ed32-4fc9-b52c-3e43df213224,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa9de1ad63a5e0ebe523165a16537d7b537267a74af77838d9dc411d4805ca14\"" Apr 13 20:41:08.838276 containerd[1598]: time="2026-04-13T20:41:08.838229154Z" level=info msg="CreateContainer within sandbox \"aa9de1ad63a5e0ebe523165a16537d7b537267a74af77838d9dc411d4805ca14\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:41:08.847757 containerd[1598]: time="2026-04-13T20:41:08.847089381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9pskl,Uid:3955d42a-feba-447a-85d5-5fb5ddb93fd0,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:41:08.854892 containerd[1598]: time="2026-04-13T20:41:08.854850477Z" level=info msg="CreateContainer within sandbox \"aa9de1ad63a5e0ebe523165a16537d7b537267a74af77838d9dc411d4805ca14\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"04c4be5e935245f75344af5fe25338d0e96a9479b578b22192aee5edb6335a52\"" Apr 13 20:41:08.856597 containerd[1598]: time="2026-04-13T20:41:08.856449637Z" level=info msg="StartContainer for \"04c4be5e935245f75344af5fe25338d0e96a9479b578b22192aee5edb6335a52\"" Apr 13 20:41:08.886902 containerd[1598]: time="2026-04-13T20:41:08.886718838Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:08.886902 containerd[1598]: time="2026-04-13T20:41:08.886805459Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:08.886902 containerd[1598]: time="2026-04-13T20:41:08.886827084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:08.887360 containerd[1598]: time="2026-04-13T20:41:08.886943380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:08.963415 containerd[1598]: time="2026-04-13T20:41:08.963262371Z" level=info msg="StartContainer for \"04c4be5e935245f75344af5fe25338d0e96a9479b578b22192aee5edb6335a52\" returns successfully" Apr 13 20:41:08.994924 containerd[1598]: time="2026-04-13T20:41:08.994739695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-9pskl,Uid:3955d42a-feba-447a-85d5-5fb5ddb93fd0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1b6d74a9031aad93a2dfdb2cd53ac34c441e470862199abf6dc28ea5e76470f8\"" Apr 13 20:41:08.999016 containerd[1598]: time="2026-04-13T20:41:08.998561711Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:41:10.097718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount10102533.mount: Deactivated successfully. 
Apr 13 20:41:10.996020 kubelet[2768]: I0413 20:41:10.995836 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-vhl4t" podStartSLOduration=3.995813731 podStartE2EDuration="3.995813731s" podCreationTimestamp="2026-04-13 20:41:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:09.652777488 +0000 UTC m=+9.265102995" watchObservedRunningTime="2026-04-13 20:41:10.995813731 +0000 UTC m=+10.608139233" Apr 13 20:41:11.525911 containerd[1598]: time="2026-04-13T20:41:11.525838356Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:11.527458 containerd[1598]: time="2026-04-13T20:41:11.527378309Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:41:11.528710 containerd[1598]: time="2026-04-13T20:41:11.528615132Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:11.531655 containerd[1598]: time="2026-04-13T20:41:11.531579097Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:11.532948 containerd[1598]: time="2026-04-13T20:41:11.532610779Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 2.533992449s" Apr 13 20:41:11.532948 containerd[1598]: time="2026-04-13T20:41:11.532659466Z" 
level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:41:11.537605 containerd[1598]: time="2026-04-13T20:41:11.537553092Z" level=info msg="CreateContainer within sandbox \"1b6d74a9031aad93a2dfdb2cd53ac34c441e470862199abf6dc28ea5e76470f8\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:41:11.552094 containerd[1598]: time="2026-04-13T20:41:11.550326170Z" level=info msg="CreateContainer within sandbox \"1b6d74a9031aad93a2dfdb2cd53ac34c441e470862199abf6dc28ea5e76470f8\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"eb218db6938e9dc91fbdb6cc66a5ba36235753dfef2b5b39c6951e062e4be29b\"" Apr 13 20:41:11.558168 containerd[1598]: time="2026-04-13T20:41:11.556118645Z" level=info msg="StartContainer for \"eb218db6938e9dc91fbdb6cc66a5ba36235753dfef2b5b39c6951e062e4be29b\"" Apr 13 20:41:11.559034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3447083750.mount: Deactivated successfully. 
Apr 13 20:41:11.637878 containerd[1598]: time="2026-04-13T20:41:11.636694459Z" level=info msg="StartContainer for \"eb218db6938e9dc91fbdb6cc66a5ba36235753dfef2b5b39c6951e062e4be29b\" returns successfully" Apr 13 20:41:14.545410 kubelet[2768]: I0413 20:41:14.545307 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-9pskl" podStartSLOduration=4.008654894 podStartE2EDuration="6.545279921s" podCreationTimestamp="2026-04-13 20:41:08 +0000 UTC" firstStartedPulling="2026-04-13 20:41:08.997354598 +0000 UTC m=+8.609680075" lastFinishedPulling="2026-04-13 20:41:11.533979617 +0000 UTC m=+11.146305102" observedRunningTime="2026-04-13 20:41:12.656627655 +0000 UTC m=+12.268953155" watchObservedRunningTime="2026-04-13 20:41:14.545279921 +0000 UTC m=+14.157605421" Apr 13 20:41:18.839355 sudo[1883]: pam_unix(sudo:session): session closed for user root Apr 13 20:41:18.957025 sshd[1879]: pam_unix(sshd:session): session closed for user core Apr 13 20:41:18.972244 systemd[1]: sshd@6-10.128.0.46:22-20.229.252.112:44558.service: Deactivated successfully. Apr 13 20:41:18.997661 systemd[1]: session-7.scope: Deactivated successfully. Apr 13 20:41:19.009284 systemd-logind[1582]: Session 7 logged out. Waiting for processes to exit. Apr 13 20:41:19.014239 systemd-logind[1582]: Removed session 7. 
Apr 13 20:41:23.074791 kubelet[2768]: I0413 20:41:23.074735 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3256bee0-35a7-4799-b8c8-c6827e956e52-typha-certs\") pod \"calico-typha-76c659769b-bxvqn\" (UID: \"3256bee0-35a7-4799-b8c8-c6827e956e52\") " pod="calico-system/calico-typha-76c659769b-bxvqn" Apr 13 20:41:23.077249 kubelet[2768]: I0413 20:41:23.074800 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xsgm\" (UniqueName: \"kubernetes.io/projected/3256bee0-35a7-4799-b8c8-c6827e956e52-kube-api-access-6xsgm\") pod \"calico-typha-76c659769b-bxvqn\" (UID: \"3256bee0-35a7-4799-b8c8-c6827e956e52\") " pod="calico-system/calico-typha-76c659769b-bxvqn" Apr 13 20:41:23.077249 kubelet[2768]: I0413 20:41:23.074836 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3256bee0-35a7-4799-b8c8-c6827e956e52-tigera-ca-bundle\") pod \"calico-typha-76c659769b-bxvqn\" (UID: \"3256bee0-35a7-4799-b8c8-c6827e956e52\") " pod="calico-system/calico-typha-76c659769b-bxvqn" Apr 13 20:41:23.276590 kubelet[2768]: I0413 20:41:23.276508 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-nodeproc\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276590 kubelet[2768]: I0413 20:41:23.276593 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-var-run-calico\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " 
pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276861 kubelet[2768]: I0413 20:41:23.276627 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-bpffs\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276861 kubelet[2768]: I0413 20:41:23.276658 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/4870b7a5-1be0-4200-845c-fc897141fba2-node-certs\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276861 kubelet[2768]: I0413 20:41:23.276684 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-var-lib-calico\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276861 kubelet[2768]: I0413 20:41:23.276736 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-xtables-lock\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.276861 kubelet[2768]: I0413 20:41:23.276765 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6sr\" (UniqueName: \"kubernetes.io/projected/4870b7a5-1be0-4200-845c-fc897141fba2-kube-api-access-wf6sr\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.277187 kubelet[2768]: I0413 
20:41:23.276798 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-cni-log-dir\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.277187 kubelet[2768]: I0413 20:41:23.276840 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-cni-net-dir\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.277187 kubelet[2768]: I0413 20:41:23.276869 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-flexvol-driver-host\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.277187 kubelet[2768]: I0413 20:41:23.276903 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-sys-fs\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.277187 kubelet[2768]: I0413 20:41:23.276939 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-cni-bin-dir\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.278349 kubelet[2768]: I0413 20:41:23.276973 2768 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-lib-modules\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.278349 kubelet[2768]: I0413 20:41:23.277004 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/4870b7a5-1be0-4200-845c-fc897141fba2-policysync\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.278349 kubelet[2768]: I0413 20:41:23.277033 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4870b7a5-1be0-4200-845c-fc897141fba2-tigera-ca-bundle\") pod \"calico-node-fcfm9\" (UID: \"4870b7a5-1be0-4200-845c-fc897141fba2\") " pod="calico-system/calico-node-fcfm9" Apr 13 20:41:23.295911 kubelet[2768]: E0413 20:41:23.295838 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:23.379741 kubelet[2768]: I0413 20:41:23.377827 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0a802a66-82ba-4481-9d13-dc399ccc739d-kubelet-dir\") pod \"csi-node-driver-wzf9p\" (UID: \"0a802a66-82ba-4481-9d13-dc399ccc739d\") " pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:23.379741 kubelet[2768]: I0413 20:41:23.377922 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0a802a66-82ba-4481-9d13-dc399ccc739d-registration-dir\") pod \"csi-node-driver-wzf9p\" (UID: \"0a802a66-82ba-4481-9d13-dc399ccc739d\") " pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:23.379741 kubelet[2768]: I0413 20:41:23.378006 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0a802a66-82ba-4481-9d13-dc399ccc739d-socket-dir\") pod \"csi-node-driver-wzf9p\" (UID: \"0a802a66-82ba-4481-9d13-dc399ccc739d\") " pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:23.379741 kubelet[2768]: I0413 20:41:23.378031 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0a802a66-82ba-4481-9d13-dc399ccc739d-varrun\") pod \"csi-node-driver-wzf9p\" (UID: \"0a802a66-82ba-4481-9d13-dc399ccc739d\") " pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:23.379741 kubelet[2768]: I0413 20:41:23.378103 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmgcx\" (UniqueName: \"kubernetes.io/projected/0a802a66-82ba-4481-9d13-dc399ccc739d-kube-api-access-nmgcx\") pod \"csi-node-driver-wzf9p\" (UID: \"0a802a66-82ba-4481-9d13-dc399ccc739d\") " pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:23.380143 containerd[1598]: time="2026-04-13T20:41:23.379292386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c659769b-bxvqn,Uid:3256bee0-35a7-4799-b8c8-c6827e956e52,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:23.380839 kubelet[2768]: E0413 20:41:23.380816 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.380986 kubelet[2768]: W0413 20:41:23.380966 2768 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.381130 kubelet[2768]: E0413 20:41:23.381113 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.381655 kubelet[2768]: E0413 20:41:23.381623 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.381781 kubelet[2768]: W0413 20:41:23.381762 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.381917 kubelet[2768]: E0413 20:41:23.381899 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.383056 kubelet[2768]: E0413 20:41:23.383037 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.383703 kubelet[2768]: W0413 20:41:23.383607 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.383844 kubelet[2768]: E0413 20:41:23.383824 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.455538 containerd[1598]: time="2026-04-13T20:41:23.455147722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:23.456306 containerd[1598]: time="2026-04-13T20:41:23.455836376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:23.456306 containerd[1598]: time="2026-04-13T20:41:23.455967826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:23.457320 containerd[1598]: time="2026-04-13T20:41:23.456268818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:23.480240 kubelet[2768]: E0413 20:41:23.479865 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.480240 kubelet[2768]: W0413 20:41:23.479892 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.480240 kubelet[2768]: E0413 20:41:23.479919 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.480547 kubelet[2768]: E0413 20:41:23.480287 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.480547 kubelet[2768]: W0413 20:41:23.480301 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.480547 kubelet[2768]: E0413 20:41:23.480320 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.480723 kubelet[2768]: E0413 20:41:23.480618 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.480723 kubelet[2768]: W0413 20:41:23.480635 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.480723 kubelet[2768]: E0413 20:41:23.480651 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.481292 kubelet[2768]: E0413 20:41:23.481033 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.481292 kubelet[2768]: W0413 20:41:23.481050 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.481292 kubelet[2768]: E0413 20:41:23.481097 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.482089 kubelet[2768]: E0413 20:41:23.482001 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.482089 kubelet[2768]: W0413 20:41:23.482017 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.482402 kubelet[2768]: E0413 20:41:23.482185 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.483364 kubelet[2768]: E0413 20:41:23.483339 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.483364 kubelet[2768]: W0413 20:41:23.483361 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.483850 kubelet[2768]: E0413 20:41:23.483377 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.484615 kubelet[2768]: E0413 20:41:23.484593 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.484615 kubelet[2768]: W0413 20:41:23.484614 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.484882 kubelet[2768]: E0413 20:41:23.484631 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.487088 kubelet[2768]: E0413 20:41:23.486565 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.487088 kubelet[2768]: W0413 20:41:23.486599 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.487088 kubelet[2768]: E0413 20:41:23.486645 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.487859 kubelet[2768]: E0413 20:41:23.487781 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.487859 kubelet[2768]: W0413 20:41:23.487801 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.487859 kubelet[2768]: E0413 20:41:23.487819 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.490154 kubelet[2768]: E0413 20:41:23.489976 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.490154 kubelet[2768]: W0413 20:41:23.489996 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.490154 kubelet[2768]: E0413 20:41:23.490013 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.490952 kubelet[2768]: E0413 20:41:23.490554 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.490952 kubelet[2768]: W0413 20:41:23.490583 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.490952 kubelet[2768]: E0413 20:41:23.490601 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.491429 kubelet[2768]: E0413 20:41:23.491304 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.491429 kubelet[2768]: W0413 20:41:23.491322 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.491429 kubelet[2768]: E0413 20:41:23.491339 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.491965 kubelet[2768]: E0413 20:41:23.491947 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.492164 kubelet[2768]: W0413 20:41:23.492142 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.494118 kubelet[2768]: E0413 20:41:23.493918 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.494428 kubelet[2768]: E0413 20:41:23.494411 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.494622 kubelet[2768]: W0413 20:41:23.494579 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.494622 kubelet[2768]: E0413 20:41:23.494604 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.495811 kubelet[2768]: E0413 20:41:23.495126 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.495811 kubelet[2768]: W0413 20:41:23.495192 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.495811 kubelet[2768]: E0413 20:41:23.495223 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.495811 kubelet[2768]: E0413 20:41:23.495583 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.495811 kubelet[2768]: W0413 20:41:23.495597 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.495811 kubelet[2768]: E0413 20:41:23.495612 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.496546 kubelet[2768]: E0413 20:41:23.496457 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.496546 kubelet[2768]: W0413 20:41:23.496474 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.496546 kubelet[2768]: E0413 20:41:23.496491 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.497173 kubelet[2768]: E0413 20:41:23.497154 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.497399 kubelet[2768]: W0413 20:41:23.497317 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.497399 kubelet[2768]: E0413 20:41:23.497342 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.498690 kubelet[2768]: E0413 20:41:23.498671 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.498858 kubelet[2768]: W0413 20:41:23.498839 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.500082 kubelet[2768]: E0413 20:41:23.499847 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.500726 kubelet[2768]: E0413 20:41:23.500679 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.500940 kubelet[2768]: W0413 20:41:23.500895 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.500940 kubelet[2768]: E0413 20:41:23.500923 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.501793 kubelet[2768]: E0413 20:41:23.501753 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.501988 kubelet[2768]: W0413 20:41:23.501968 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.502146 kubelet[2768]: E0413 20:41:23.502128 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.503542 kubelet[2768]: E0413 20:41:23.503522 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.504836 kubelet[2768]: W0413 20:41:23.504368 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.504836 kubelet[2768]: E0413 20:41:23.504394 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.505185 kubelet[2768]: E0413 20:41:23.505038 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.505185 kubelet[2768]: W0413 20:41:23.505056 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.505185 kubelet[2768]: E0413 20:41:23.505097 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.505830 kubelet[2768]: E0413 20:41:23.505810 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.505957 kubelet[2768]: W0413 20:41:23.505940 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.506146 kubelet[2768]: E0413 20:41:23.506040 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.506615 kubelet[2768]: E0413 20:41:23.506596 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.507958 kubelet[2768]: W0413 20:41:23.507685 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.507958 kubelet[2768]: E0413 20:41:23.507719 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:23.521524 containerd[1598]: time="2026-04-13T20:41:23.521472863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcfm9,Uid:4870b7a5-1be0-4200-845c-fc897141fba2,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:23.533056 kubelet[2768]: E0413 20:41:23.532931 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:23.533056 kubelet[2768]: W0413 20:41:23.532977 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:23.533056 kubelet[2768]: E0413 20:41:23.533004 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:23.576496 containerd[1598]: time="2026-04-13T20:41:23.576172371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-76c659769b-bxvqn,Uid:3256bee0-35a7-4799-b8c8-c6827e956e52,Namespace:calico-system,Attempt:0,} returns sandbox id \"77618162623128a202b07a620ea9b7940404a5ac71c56eefcfab6b277df4e897\"" Apr 13 20:41:23.578400 containerd[1598]: time="2026-04-13T20:41:23.578218893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:23.578983 containerd[1598]: time="2026-04-13T20:41:23.578306015Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:23.578983 containerd[1598]: time="2026-04-13T20:41:23.578341722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:23.578983 containerd[1598]: time="2026-04-13T20:41:23.578478656Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:23.579880 containerd[1598]: time="2026-04-13T20:41:23.579780846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\"" Apr 13 20:41:23.634973 containerd[1598]: time="2026-04-13T20:41:23.634859749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fcfm9,Uid:4870b7a5-1be0-4200-845c-fc897141fba2,Namespace:calico-system,Attempt:0,} returns sandbox id \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\"" Apr 13 20:41:24.691684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount139637115.mount: Deactivated successfully. 
Apr 13 20:41:25.573984 kubelet[2768]: E0413 20:41:25.573375 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:25.703974 containerd[1598]: time="2026-04-13T20:41:25.703910172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:25.705352 containerd[1598]: time="2026-04-13T20:41:25.705195483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596" Apr 13 20:41:25.706243 containerd[1598]: time="2026-04-13T20:41:25.706051916Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:25.709466 containerd[1598]: time="2026-04-13T20:41:25.709407822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:25.710850 containerd[1598]: time="2026-04-13T20:41:25.710705850Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.130883879s" Apr 13 20:41:25.710850 containerd[1598]: time="2026-04-13T20:41:25.710747120Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference 
\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\"" Apr 13 20:41:25.722083 containerd[1598]: time="2026-04-13T20:41:25.720685903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\"" Apr 13 20:41:25.737829 containerd[1598]: time="2026-04-13T20:41:25.737662905Z" level=info msg="CreateContainer within sandbox \"77618162623128a202b07a620ea9b7940404a5ac71c56eefcfab6b277df4e897\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Apr 13 20:41:25.753490 containerd[1598]: time="2026-04-13T20:41:25.753431008Z" level=info msg="CreateContainer within sandbox \"77618162623128a202b07a620ea9b7940404a5ac71c56eefcfab6b277df4e897\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d96deb9acecdede07a0eb6458d7b0c5c492a7002532896d0ee6d4608065c4a20\"" Apr 13 20:41:25.754382 containerd[1598]: time="2026-04-13T20:41:25.754215461Z" level=info msg="StartContainer for \"d96deb9acecdede07a0eb6458d7b0c5c492a7002532896d0ee6d4608065c4a20\"" Apr 13 20:41:25.859668 containerd[1598]: time="2026-04-13T20:41:25.859501480Z" level=info msg="StartContainer for \"d96deb9acecdede07a0eb6458d7b0c5c492a7002532896d0ee6d4608065c4a20\" returns successfully" Apr 13 20:41:26.795628 kubelet[2768]: E0413 20:41:26.795583 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.796783 kubelet[2768]: W0413 20:41:26.796390 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.796783 kubelet[2768]: E0413 20:41:26.796436 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.797196 kubelet[2768]: E0413 20:41:26.796997 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.797364 kubelet[2768]: W0413 20:41:26.797014 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.797641 kubelet[2768]: E0413 20:41:26.797439 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:26.797829 kubelet[2768]: E0413 20:41:26.797812 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.798230 kubelet[2768]: W0413 20:41:26.798106 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.798793 kubelet[2768]: E0413 20:41:26.798508 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.799332 kubelet[2768]: E0413 20:41:26.799307 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.799332 kubelet[2768]: W0413 20:41:26.799330 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.799487 kubelet[2768]: E0413 20:41:26.799352 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.799822 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.801041 kubelet[2768]: W0413 20:41:26.799837 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.799852 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.800198 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.801041 kubelet[2768]: W0413 20:41:26.800211 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.800225 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.800544 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.801041 kubelet[2768]: W0413 20:41:26.800558 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.801041 kubelet[2768]: E0413 20:41:26.800573 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.801167 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.803187 kubelet[2768]: W0413 20:41:26.801183 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.801200 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.801991 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.803187 kubelet[2768]: W0413 20:41:26.802006 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.802022 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.802417 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.803187 kubelet[2768]: W0413 20:41:26.802433 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.802449 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:41:26.803187 kubelet[2768]: E0413 20:41:26.802765 2768 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:41:26.805234 kubelet[2768]: W0413 20:41:26.802779 2768 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:41:26.805234 kubelet[2768]: E0413 20:41:26.802796 2768 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:41:26.972312 containerd[1598]: time="2026-04-13T20:41:26.972247672Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:26.973625 containerd[1598]: time="2026-04-13T20:41:26.973571718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:41:26.974883 containerd[1598]: time="2026-04-13T20:41:26.974846624Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:26.978674 containerd[1598]: time="2026-04-13T20:41:26.977943383Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:26.980099 containerd[1598]: time="2026-04-13T20:41:26.979288924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.258555939s" Apr 13 20:41:26.980099 containerd[1598]: time="2026-04-13T20:41:26.979338377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:41:26.984984 containerd[1598]: time="2026-04-13T20:41:26.984947556Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:41:27.007010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount236807935.mount: Deactivated successfully. Apr 13 20:41:27.007717 containerd[1598]: time="2026-04-13T20:41:27.007671198Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6\"" Apr 13 20:41:27.010196 containerd[1598]: time="2026-04-13T20:41:27.009468187Z" level=info msg="StartContainer for \"f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6\"" Apr 13 20:41:27.093122 containerd[1598]: time="2026-04-13T20:41:27.090555978Z" level=info msg="StartContainer for \"f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6\" returns successfully" Apr 13 20:41:27.573160 kubelet[2768]: E0413 20:41:27.573087 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:27.713519 kubelet[2768]: I0413 20:41:27.713471 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:41:27.726254 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6-rootfs.mount: Deactivated successfully. 
Apr 13 20:41:27.738338 kubelet[2768]: I0413 20:41:27.738251 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-76c659769b-bxvqn" podStartSLOduration=2.604523257 podStartE2EDuration="4.738213923s" podCreationTimestamp="2026-04-13 20:41:23 +0000 UTC" firstStartedPulling="2026-04-13 20:41:23.578662907 +0000 UTC m=+23.190988399" lastFinishedPulling="2026-04-13 20:41:25.712353585 +0000 UTC m=+25.324679065" observedRunningTime="2026-04-13 20:41:26.740007608 +0000 UTC m=+26.352333109" watchObservedRunningTime="2026-04-13 20:41:27.738213923 +0000 UTC m=+27.350539425" Apr 13 20:41:28.069727 containerd[1598]: time="2026-04-13T20:41:28.069649368Z" level=info msg="shim disconnected" id=f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6 namespace=k8s.io Apr 13 20:41:28.069727 containerd[1598]: time="2026-04-13T20:41:28.069723543Z" level=warning msg="cleaning up after shim disconnected" id=f4899e1d37050cd40fe2dddb8a9e0fb8f6b5cdc9024884d411aaddcef205e7a6 namespace=k8s.io Apr 13 20:41:28.069727 containerd[1598]: time="2026-04-13T20:41:28.069737190Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:41:28.720537 containerd[1598]: time="2026-04-13T20:41:28.720472996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:41:29.572888 kubelet[2768]: E0413 20:41:29.572565 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:31.573195 kubelet[2768]: E0413 20:41:31.572889 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:33.572849 kubelet[2768]: E0413 20:41:33.572430 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:35.131457 kubelet[2768]: I0413 20:41:35.131414 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:41:35.573413 kubelet[2768]: E0413 20:41:35.573357 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:35.603194 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount434422800.mount: Deactivated successfully. 
Apr 13 20:41:35.631875 containerd[1598]: time="2026-04-13T20:41:35.631797377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:35.633309 containerd[1598]: time="2026-04-13T20:41:35.633121516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:41:35.634362 containerd[1598]: time="2026-04-13T20:41:35.634290689Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:35.637292 containerd[1598]: time="2026-04-13T20:41:35.637231366Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:35.638450 containerd[1598]: time="2026-04-13T20:41:35.638137086Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 6.91749447s" Apr 13 20:41:35.638450 containerd[1598]: time="2026-04-13T20:41:35.638186007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:41:35.643429 containerd[1598]: time="2026-04-13T20:41:35.643389768Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:41:35.663854 containerd[1598]: time="2026-04-13T20:41:35.663803603Z" level=info 
msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001\"" Apr 13 20:41:35.665118 containerd[1598]: time="2026-04-13T20:41:35.664613359Z" level=info msg="StartContainer for \"e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001\"" Apr 13 20:41:35.758098 containerd[1598]: time="2026-04-13T20:41:35.757802781Z" level=info msg="StartContainer for \"e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001\" returns successfully" Apr 13 20:41:36.606401 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001-rootfs.mount: Deactivated successfully. Apr 13 20:41:37.439712 containerd[1598]: time="2026-04-13T20:41:37.439624320Z" level=info msg="shim disconnected" id=e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001 namespace=k8s.io Apr 13 20:41:37.439712 containerd[1598]: time="2026-04-13T20:41:37.439694103Z" level=warning msg="cleaning up after shim disconnected" id=e058784a14c558b25c8152db56ae433977f9772d52cb7436c5631bb9cafe2001 namespace=k8s.io Apr 13 20:41:37.439712 containerd[1598]: time="2026-04-13T20:41:37.439709252Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:41:37.573419 kubelet[2768]: E0413 20:41:37.573334 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:37.757397 containerd[1598]: time="2026-04-13T20:41:37.757038305Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:41:39.572977 kubelet[2768]: E0413 20:41:39.572708 2768 pod_workers.go:1301] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:41.033159 containerd[1598]: time="2026-04-13T20:41:41.033090369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:41.034625 containerd[1598]: time="2026-04-13T20:41:41.034572174Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:41:41.035397 containerd[1598]: time="2026-04-13T20:41:41.035128708Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:41.038792 containerd[1598]: time="2026-04-13T20:41:41.038727949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:41.040150 containerd[1598]: time="2026-04-13T20:41:41.040101357Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.282993914s" Apr 13 20:41:41.040578 containerd[1598]: time="2026-04-13T20:41:41.040155983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:41:41.045791 containerd[1598]: time="2026-04-13T20:41:41.045649630Z" 
level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:41:41.065346 containerd[1598]: time="2026-04-13T20:41:41.065289308Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89\"" Apr 13 20:41:41.067885 containerd[1598]: time="2026-04-13T20:41:41.066407933Z" level=info msg="StartContainer for \"ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89\"" Apr 13 20:41:41.156873 containerd[1598]: time="2026-04-13T20:41:41.156814046Z" level=info msg="StartContainer for \"ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89\" returns successfully" Apr 13 20:41:41.574095 kubelet[2768]: E0413 20:41:41.572813 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:42.212241 containerd[1598]: time="2026-04-13T20:41:42.211786540Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:41:42.246992 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89-rootfs.mount: Deactivated successfully. 
Apr 13 20:41:42.270643 kubelet[2768]: I0413 20:41:42.268200 2768 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Apr 13 20:41:42.538873 kubelet[2768]: I0413 20:41:42.538642 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-backend-key-pair\") pod \"whisker-848d97df56-d5zqb\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:42.538873 kubelet[2768]: I0413 20:41:42.538707 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j84vh\" (UniqueName: \"kubernetes.io/projected/0a0711eb-188e-496e-b764-dcc10a1782d1-kube-api-access-j84vh\") pod \"whisker-848d97df56-d5zqb\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:42.538873 kubelet[2768]: I0413 20:41:42.538734 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-ca-bundle\") pod \"whisker-848d97df56-d5zqb\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:42.538873 kubelet[2768]: I0413 20:41:42.538766 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-nginx-config\") pod \"whisker-848d97df56-d5zqb\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:42.739403 kubelet[2768]: I0413 20:41:42.739246 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/2e0a0dff-fcd2-4863-8bb8-041686ac070a-config\") pod \"goldmane-5b85766d88-qch75\" (UID: \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\") " pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:42.739403 kubelet[2768]: I0413 20:41:42.739328 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2e0a0dff-fcd2-4863-8bb8-041686ac070a-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-qch75\" (UID: \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\") " pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:42.739403 kubelet[2768]: I0413 20:41:42.739357 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-km9nr\" (UniqueName: \"kubernetes.io/projected/2e0a0dff-fcd2-4863-8bb8-041686ac070a-kube-api-access-km9nr\") pod \"goldmane-5b85766d88-qch75\" (UID: \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\") " pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:42.739403 kubelet[2768]: I0413 20:41:42.739390 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2e0a0dff-fcd2-4863-8bb8-041686ac070a-goldmane-key-pair\") pod \"goldmane-5b85766d88-qch75\" (UID: \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\") " pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:42.747948 containerd[1598]: time="2026-04-13T20:41:42.747442557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-848d97df56-d5zqb,Uid:0a0711eb-188e-496e-b764-dcc10a1782d1,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:42.811310 containerd[1598]: time="2026-04-13T20:41:42.808521542Z" level=info msg="shim disconnected" id=ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89 namespace=k8s.io Apr 13 20:41:42.811310 containerd[1598]: time="2026-04-13T20:41:42.808989173Z" level=warning msg="cleaning up after shim 
disconnected" id=ac54f4872e36169047a3016690f2efe2554f160deb10aa14eebb154e10973a89 namespace=k8s.io Apr 13 20:41:42.811310 containerd[1598]: time="2026-04-13T20:41:42.809009828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:41:42.842110 kubelet[2768]: I0413 20:41:42.840136 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7953eac-aed1-4972-966d-335bb475a17a-tigera-ca-bundle\") pod \"calico-kube-controllers-67b544dcfd-44f62\" (UID: \"c7953eac-aed1-4972-966d-335bb475a17a\") " pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" Apr 13 20:41:42.842110 kubelet[2768]: I0413 20:41:42.840317 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdg6h\" (UniqueName: \"kubernetes.io/projected/c7953eac-aed1-4972-966d-335bb475a17a-kube-api-access-kdg6h\") pod \"calico-kube-controllers-67b544dcfd-44f62\" (UID: \"c7953eac-aed1-4972-966d-335bb475a17a\") " pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" Apr 13 20:41:42.906360 containerd[1598]: time="2026-04-13T20:41:42.905262388Z" level=warning msg="cleanup warnings time=\"2026-04-13T20:41:42Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Apr 13 20:41:42.930353 containerd[1598]: time="2026-04-13T20:41:42.930280928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qch75,Uid:2e0a0dff-fcd2-4863-8bb8-041686ac070a,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:42.941824 kubelet[2768]: I0413 20:41:42.941728 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/28cbe2de-3916-40c2-b29d-4324dd024eb0-calico-apiserver-certs\") pod 
\"calico-apiserver-7d4775f99-4x7xd\" (UID: \"28cbe2de-3916-40c2-b29d-4324dd024eb0\") " pod="calico-system/calico-apiserver-7d4775f99-4x7xd" Apr 13 20:41:42.942238 kubelet[2768]: I0413 20:41:42.942171 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj6tr\" (UniqueName: \"kubernetes.io/projected/bf6c12d4-a453-4d40-bc8e-b49f714452b6-kube-api-access-pj6tr\") pod \"coredns-674b8bbfcf-dkdsv\" (UID: \"bf6c12d4-a453-4d40-bc8e-b49f714452b6\") " pod="kube-system/coredns-674b8bbfcf-dkdsv" Apr 13 20:41:42.944100 kubelet[2768]: I0413 20:41:42.942775 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sb9k4\" (UniqueName: \"kubernetes.io/projected/28cbe2de-3916-40c2-b29d-4324dd024eb0-kube-api-access-sb9k4\") pod \"calico-apiserver-7d4775f99-4x7xd\" (UID: \"28cbe2de-3916-40c2-b29d-4324dd024eb0\") " pod="calico-system/calico-apiserver-7d4775f99-4x7xd" Apr 13 20:41:42.944100 kubelet[2768]: I0413 20:41:42.942973 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f5a2099-cb50-4d52-9877-f1dd83710551-config-volume\") pod \"coredns-674b8bbfcf-2m2c2\" (UID: \"9f5a2099-cb50-4d52-9877-f1dd83710551\") " pod="kube-system/coredns-674b8bbfcf-2m2c2" Apr 13 20:41:42.944100 kubelet[2768]: I0413 20:41:42.943038 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/80bc0439-cac3-4b71-ae31-e9556293dc74-calico-apiserver-certs\") pod \"calico-apiserver-7d4775f99-cfwgd\" (UID: \"80bc0439-cac3-4b71-ae31-e9556293dc74\") " pod="calico-system/calico-apiserver-7d4775f99-cfwgd" Apr 13 20:41:42.944100 kubelet[2768]: I0413 20:41:42.943093 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kt4w\" 
(UniqueName: \"kubernetes.io/projected/80bc0439-cac3-4b71-ae31-e9556293dc74-kube-api-access-8kt4w\") pod \"calico-apiserver-7d4775f99-cfwgd\" (UID: \"80bc0439-cac3-4b71-ae31-e9556293dc74\") " pod="calico-system/calico-apiserver-7d4775f99-cfwgd" Apr 13 20:41:42.944100 kubelet[2768]: I0413 20:41:42.943125 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4zq2\" (UniqueName: \"kubernetes.io/projected/9f5a2099-cb50-4d52-9877-f1dd83710551-kube-api-access-g4zq2\") pod \"coredns-674b8bbfcf-2m2c2\" (UID: \"9f5a2099-cb50-4d52-9877-f1dd83710551\") " pod="kube-system/coredns-674b8bbfcf-2m2c2" Apr 13 20:41:42.944466 kubelet[2768]: I0413 20:41:42.943149 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf6c12d4-a453-4d40-bc8e-b49f714452b6-config-volume\") pod \"coredns-674b8bbfcf-dkdsv\" (UID: \"bf6c12d4-a453-4d40-bc8e-b49f714452b6\") " pod="kube-system/coredns-674b8bbfcf-dkdsv" Apr 13 20:41:43.013255 containerd[1598]: time="2026-04-13T20:41:43.012947736Z" level=error msg="Failed to destroy network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.013779 containerd[1598]: time="2026-04-13T20:41:43.013480991Z" level=error msg="encountered an error cleaning up failed sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.013779 containerd[1598]: time="2026-04-13T20:41:43.013561526Z" level=error msg="RunPodSandbox 
for &PodSandboxMetadata{Name:whisker-848d97df56-d5zqb,Uid:0a0711eb-188e-496e-b764-dcc10a1782d1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.014015 kubelet[2768]: E0413 20:41:43.013855 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.014015 kubelet[2768]: E0413 20:41:43.013938 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:43.014015 kubelet[2768]: E0413 20:41:43.013978 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-848d97df56-d5zqb" Apr 13 20:41:43.014227 kubelet[2768]: E0413 20:41:43.014049 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"whisker-848d97df56-d5zqb_calico-system(0a0711eb-188e-496e-b764-dcc10a1782d1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-848d97df56-d5zqb_calico-system(0a0711eb-188e-496e-b764-dcc10a1782d1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848d97df56-d5zqb" podUID="0a0711eb-188e-496e-b764-dcc10a1782d1" Apr 13 20:41:43.066124 containerd[1598]: time="2026-04-13T20:41:43.064424905Z" level=error msg="Failed to destroy network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.066124 containerd[1598]: time="2026-04-13T20:41:43.064888068Z" level=error msg="encountered an error cleaning up failed sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.066124 containerd[1598]: time="2026-04-13T20:41:43.064957978Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qch75,Uid:2e0a0dff-fcd2-4863-8bb8-041686ac070a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.068407 kubelet[2768]: E0413 20:41:43.067975 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.068407 kubelet[2768]: E0413 20:41:43.068049 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:43.068407 kubelet[2768]: E0413 20:41:43.068094 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-qch75" Apr 13 20:41:43.068646 kubelet[2768]: E0413 20:41:43.068157 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-qch75_calico-system(2e0a0dff-fcd2-4863-8bb8-041686ac070a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-qch75_calico-system(2e0a0dff-fcd2-4863-8bb8-041686ac070a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-qch75" podUID="2e0a0dff-fcd2-4863-8bb8-041686ac070a" Apr 13 20:41:43.119261 containerd[1598]: time="2026-04-13T20:41:43.118805568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b544dcfd-44f62,Uid:c7953eac-aed1-4972-966d-335bb475a17a,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:43.174447 containerd[1598]: time="2026-04-13T20:41:43.174392901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-4x7xd,Uid:28cbe2de-3916-40c2-b29d-4324dd024eb0,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:43.198766 containerd[1598]: time="2026-04-13T20:41:43.198284923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dkdsv,Uid:bf6c12d4-a453-4d40-bc8e-b49f714452b6,Namespace:kube-system,Attempt:0,}" Apr 13 20:41:43.199547 containerd[1598]: time="2026-04-13T20:41:43.199017123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2m2c2,Uid:9f5a2099-cb50-4d52-9877-f1dd83710551,Namespace:kube-system,Attempt:0,}" Apr 13 20:41:43.208171 containerd[1598]: time="2026-04-13T20:41:43.207992811Z" level=error msg="Failed to destroy network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.209124 containerd[1598]: time="2026-04-13T20:41:43.209054176Z" level=error msg="encountered an error cleaning up failed sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.209900 containerd[1598]: time="2026-04-13T20:41:43.209343844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b544dcfd-44f62,Uid:c7953eac-aed1-4972-966d-335bb475a17a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.210024 kubelet[2768]: E0413 20:41:43.209732 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.210024 kubelet[2768]: E0413 20:41:43.209818 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" Apr 13 20:41:43.210024 kubelet[2768]: E0413 20:41:43.209850 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" Apr 13 20:41:43.210359 kubelet[2768]: E0413 20:41:43.209918 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67b544dcfd-44f62_calico-system(c7953eac-aed1-4972-966d-335bb475a17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67b544dcfd-44f62_calico-system(c7953eac-aed1-4972-966d-335bb475a17a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" podUID="c7953eac-aed1-4972-966d-335bb475a17a" Apr 13 20:41:43.217988 containerd[1598]: time="2026-04-13T20:41:43.217656131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-cfwgd,Uid:80bc0439-cac3-4b71-ae31-e9556293dc74,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:43.294273 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e-shm.mount: Deactivated successfully. 
Apr 13 20:41:43.412539 containerd[1598]: time="2026-04-13T20:41:43.411327414Z" level=error msg="Failed to destroy network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.417219 containerd[1598]: time="2026-04-13T20:41:43.415572098Z" level=error msg="encountered an error cleaning up failed sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.417219 containerd[1598]: time="2026-04-13T20:41:43.415655082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-4x7xd,Uid:28cbe2de-3916-40c2-b29d-4324dd024eb0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.417461 kubelet[2768]: E0413 20:41:43.415993 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.417461 kubelet[2768]: E0413 20:41:43.416210 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4775f99-4x7xd" Apr 13 20:41:43.417461 kubelet[2768]: E0413 20:41:43.416254 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4775f99-4x7xd" Apr 13 20:41:43.419427 kubelet[2768]: E0413 20:41:43.417718 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4775f99-4x7xd_calico-system(28cbe2de-3916-40c2-b29d-4324dd024eb0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4775f99-4x7xd_calico-system(28cbe2de-3916-40c2-b29d-4324dd024eb0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4775f99-4x7xd" podUID="28cbe2de-3916-40c2-b29d-4324dd024eb0" Apr 13 20:41:43.421125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320-shm.mount: Deactivated successfully. 
Apr 13 20:41:43.476639 containerd[1598]: time="2026-04-13T20:41:43.476495567Z" level=error msg="Failed to destroy network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.480545 containerd[1598]: time="2026-04-13T20:41:43.480232500Z" level=error msg="encountered an error cleaning up failed sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.483112 containerd[1598]: time="2026-04-13T20:41:43.481301762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dkdsv,Uid:bf6c12d4-a453-4d40-bc8e-b49f714452b6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.482981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109-shm.mount: Deactivated successfully. 
Apr 13 20:41:43.483985 kubelet[2768]: E0413 20:41:43.483929 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.484124 kubelet[2768]: E0413 20:41:43.484030 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dkdsv" Apr 13 20:41:43.486659 kubelet[2768]: E0413 20:41:43.486601 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-dkdsv" Apr 13 20:41:43.489447 kubelet[2768]: E0413 20:41:43.486761 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-dkdsv_kube-system(bf6c12d4-a453-4d40-bc8e-b49f714452b6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-dkdsv_kube-system(bf6c12d4-a453-4d40-bc8e-b49f714452b6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dkdsv" podUID="bf6c12d4-a453-4d40-bc8e-b49f714452b6" Apr 13 20:41:43.510461 containerd[1598]: time="2026-04-13T20:41:43.510401106Z" level=error msg="Failed to destroy network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.512719 containerd[1598]: time="2026-04-13T20:41:43.511138331Z" level=error msg="encountered an error cleaning up failed sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.512963 containerd[1598]: time="2026-04-13T20:41:43.512924028Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2m2c2,Uid:9f5a2099-cb50-4d52-9877-f1dd83710551,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.514156 kubelet[2768]: E0413 20:41:43.513363 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 13 20:41:43.514375 kubelet[2768]: E0413 20:41:43.514343 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2m2c2" Apr 13 20:41:43.514516 kubelet[2768]: E0413 20:41:43.514491 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-2m2c2" Apr 13 20:41:43.514703 kubelet[2768]: E0413 20:41:43.514666 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-2m2c2_kube-system(9f5a2099-cb50-4d52-9877-f1dd83710551)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-2m2c2_kube-system(9f5a2099-cb50-4d52-9877-f1dd83710551)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2m2c2" podUID="9f5a2099-cb50-4d52-9877-f1dd83710551" Apr 13 20:41:43.516795 containerd[1598]: time="2026-04-13T20:41:43.516750527Z" level=error msg="Failed to destroy network for sandbox 
\"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.517869 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6-shm.mount: Deactivated successfully. Apr 13 20:41:43.520294 containerd[1598]: time="2026-04-13T20:41:43.519585224Z" level=error msg="encountered an error cleaning up failed sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.520294 containerd[1598]: time="2026-04-13T20:41:43.519711184Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-cfwgd,Uid:80bc0439-cac3-4b71-ae31-e9556293dc74,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.520539 kubelet[2768]: E0413 20:41:43.520000 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.520539 kubelet[2768]: E0413 20:41:43.520162 2768 kuberuntime_sandbox.go:70] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4775f99-cfwgd" Apr 13 20:41:43.520539 kubelet[2768]: E0413 20:41:43.520216 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4775f99-cfwgd" Apr 13 20:41:43.520770 kubelet[2768]: E0413 20:41:43.520303 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4775f99-cfwgd_calico-system(80bc0439-cac3-4b71-ae31-e9556293dc74)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4775f99-cfwgd_calico-system(80bc0439-cac3-4b71-ae31-e9556293dc74)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4775f99-cfwgd" podUID="80bc0439-cac3-4b71-ae31-e9556293dc74" Apr 13 20:41:43.576550 containerd[1598]: time="2026-04-13T20:41:43.576497300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzf9p,Uid:0a802a66-82ba-4481-9d13-dc399ccc739d,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:43.650764 containerd[1598]: 
time="2026-04-13T20:41:43.650702002Z" level=error msg="Failed to destroy network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.651294 containerd[1598]: time="2026-04-13T20:41:43.651230807Z" level=error msg="encountered an error cleaning up failed sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.651398 containerd[1598]: time="2026-04-13T20:41:43.651304554Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzf9p,Uid:0a802a66-82ba-4481-9d13-dc399ccc739d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.651656 kubelet[2768]: E0413 20:41:43.651583 2768 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.652422 kubelet[2768]: E0413 20:41:43.651658 2768 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:43.652422 kubelet[2768]: E0413 20:41:43.651695 2768 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wzf9p" Apr 13 20:41:43.652422 kubelet[2768]: E0413 20:41:43.651778 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wzf9p_calico-system(0a802a66-82ba-4481-9d13-dc399ccc739d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wzf9p_calico-system(0a802a66-82ba-4481-9d13-dc399ccc739d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:43.784626 kubelet[2768]: I0413 20:41:43.784532 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:43.786551 containerd[1598]: time="2026-04-13T20:41:43.786330654Z" level=info msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" Apr 13 20:41:43.787994 
kubelet[2768]: I0413 20:41:43.787485 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:43.788176 containerd[1598]: time="2026-04-13T20:41:43.787612223Z" level=info msg="Ensure that sandbox bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4 in task-service has been cleanup successfully" Apr 13 20:41:43.792110 kubelet[2768]: I0413 20:41:43.791315 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:43.793244 containerd[1598]: time="2026-04-13T20:41:43.792623559Z" level=info msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" Apr 13 20:41:43.793244 containerd[1598]: time="2026-04-13T20:41:43.792848918Z" level=info msg="Ensure that sandbox 7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e in task-service has been cleanup successfully" Apr 13 20:41:43.793671 containerd[1598]: time="2026-04-13T20:41:43.793638141Z" level=info msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" Apr 13 20:41:43.794192 containerd[1598]: time="2026-04-13T20:41:43.794128193Z" level=info msg="Ensure that sandbox 588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6 in task-service has been cleanup successfully" Apr 13 20:41:43.798664 kubelet[2768]: I0413 20:41:43.797485 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:43.806348 containerd[1598]: time="2026-04-13T20:41:43.804812478Z" level=info msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" Apr 13 20:41:43.806348 containerd[1598]: time="2026-04-13T20:41:43.805118182Z" level=info msg="Ensure that sandbox 
992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481 in task-service has been cleanup successfully" Apr 13 20:41:43.812478 kubelet[2768]: I0413 20:41:43.812427 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:43.816105 containerd[1598]: time="2026-04-13T20:41:43.816014477Z" level=info msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" Apr 13 20:41:43.817496 containerd[1598]: time="2026-04-13T20:41:43.817457280Z" level=info msg="Ensure that sandbox 206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5 in task-service has been cleanup successfully" Apr 13 20:41:43.825019 kubelet[2768]: I0413 20:41:43.824985 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:43.826367 containerd[1598]: time="2026-04-13T20:41:43.826219369Z" level=info msg="StopPodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" Apr 13 20:41:43.827260 containerd[1598]: time="2026-04-13T20:41:43.826466075Z" level=info msg="Ensure that sandbox 9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49 in task-service has been cleanup successfully" Apr 13 20:41:43.866823 kubelet[2768]: I0413 20:41:43.866035 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:43.872030 containerd[1598]: time="2026-04-13T20:41:43.871227859Z" level=info msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" Apr 13 20:41:43.872030 containerd[1598]: time="2026-04-13T20:41:43.871490005Z" level=info msg="Ensure that sandbox ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109 in task-service has been cleanup successfully" Apr 13 
20:41:43.892289 kubelet[2768]: I0413 20:41:43.892235 2768 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:43.897403 containerd[1598]: time="2026-04-13T20:41:43.897356952Z" level=info msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" Apr 13 20:41:43.898093 containerd[1598]: time="2026-04-13T20:41:43.897853336Z" level=info msg="Ensure that sandbox a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320 in task-service has been cleanup successfully" Apr 13 20:41:43.898667 containerd[1598]: time="2026-04-13T20:41:43.898631586Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:41:43.974090 containerd[1598]: time="2026-04-13T20:41:43.973941805Z" level=info msg="CreateContainer within sandbox \"ec76aa27e95d475a31557604c9866bb04fb78309066f91b4ad452fb1e638dfe6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"198d405df0b78fab0d134337aea3f7fcc5ac2fd505702c9e1a3a0f332750bc2d\"" Apr 13 20:41:43.977034 containerd[1598]: time="2026-04-13T20:41:43.976868785Z" level=info msg="StartContainer for \"198d405df0b78fab0d134337aea3f7fcc5ac2fd505702c9e1a3a0f332750bc2d\"" Apr 13 20:41:43.985548 containerd[1598]: time="2026-04-13T20:41:43.985409344Z" level=error msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" failed" error="failed to destroy network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:43.985889 kubelet[2768]: E0413 20:41:43.985674 2768 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:43.985889 kubelet[2768]: E0413 20:41:43.985750 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e"} Apr 13 20:41:43.985889 kubelet[2768]: E0413 20:41:43.985825 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a0711eb-188e-496e-b764-dcc10a1782d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:43.985889 kubelet[2768]: E0413 20:41:43.985861 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a0711eb-188e-496e-b764-dcc10a1782d1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-848d97df56-d5zqb" podUID="0a0711eb-188e-496e-b764-dcc10a1782d1" Apr 13 20:41:44.013924 containerd[1598]: time="2026-04-13T20:41:44.013842391Z" level=error msg="StopPodSandbox for 
\"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" failed" error="failed to destroy network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.014207 kubelet[2768]: E0413 20:41:44.014154 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:44.014466 kubelet[2768]: E0413 20:41:44.014230 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49"} Apr 13 20:41:44.014466 kubelet[2768]: E0413 20:41:44.014298 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.014466 kubelet[2768]: E0413 20:41:44.014340 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2e0a0dff-fcd2-4863-8bb8-041686ac070a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-qch75" podUID="2e0a0dff-fcd2-4863-8bb8-041686ac070a" Apr 13 20:41:44.038628 containerd[1598]: time="2026-04-13T20:41:44.038486991Z" level=error msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" failed" error="failed to destroy network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.040240 containerd[1598]: time="2026-04-13T20:41:44.039312871Z" level=error msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" failed" error="failed to destroy network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.041929 kubelet[2768]: E0413 20:41:44.040676 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:44.041929 kubelet[2768]: E0413 20:41:44.040745 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481"} Apr 13 20:41:44.041929 kubelet[2768]: E0413 20:41:44.040799 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0a802a66-82ba-4481-9d13-dc399ccc739d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.041929 kubelet[2768]: E0413 20:41:44.040837 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0a802a66-82ba-4481-9d13-dc399ccc739d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wzf9p" podUID="0a802a66-82ba-4481-9d13-dc399ccc739d" Apr 13 20:41:44.042368 kubelet[2768]: E0413 20:41:44.040882 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:44.042368 kubelet[2768]: E0413 20:41:44.040909 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4"} Apr 13 20:41:44.042368 kubelet[2768]: E0413 20:41:44.040940 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"80bc0439-cac3-4b71-ae31-e9556293dc74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.042368 kubelet[2768]: E0413 20:41:44.040983 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"80bc0439-cac3-4b71-ae31-e9556293dc74\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4775f99-cfwgd" podUID="80bc0439-cac3-4b71-ae31-e9556293dc74" Apr 13 20:41:44.085210 containerd[1598]: time="2026-04-13T20:41:44.085115241Z" level=error msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" failed" error="failed to destroy network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.087322 kubelet[2768]: E0413 20:41:44.085547 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for 
sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:44.087322 kubelet[2768]: E0413 20:41:44.085617 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6"} Apr 13 20:41:44.087322 kubelet[2768]: E0413 20:41:44.085671 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f5a2099-cb50-4d52-9877-f1dd83710551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.087322 kubelet[2768]: E0413 20:41:44.085710 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f5a2099-cb50-4d52-9877-f1dd83710551\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-2m2c2" podUID="9f5a2099-cb50-4d52-9877-f1dd83710551" Apr 13 20:41:44.115208 containerd[1598]: time="2026-04-13T20:41:44.114318317Z" level=error msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" failed" error="failed to destroy network for 
sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.115414 kubelet[2768]: E0413 20:41:44.114633 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:44.115414 kubelet[2768]: E0413 20:41:44.114786 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5"} Apr 13 20:41:44.115414 kubelet[2768]: E0413 20:41:44.114834 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7953eac-aed1-4972-966d-335bb475a17a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.115414 kubelet[2768]: E0413 20:41:44.114877 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7953eac-aed1-4972-966d-335bb475a17a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" podUID="c7953eac-aed1-4972-966d-335bb475a17a" Apr 13 20:41:44.121184 containerd[1598]: time="2026-04-13T20:41:44.121015579Z" level=error msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" failed" error="failed to destroy network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.122174 kubelet[2768]: E0413 20:41:44.122125 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:44.122550 kubelet[2768]: E0413 20:41:44.122190 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109"} Apr 13 20:41:44.123791 kubelet[2768]: E0413 20:41:44.122245 2768 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bf6c12d4-a453-4d40-bc8e-b49f714452b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.124343 kubelet[2768]: E0413 20:41:44.123880 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bf6c12d4-a453-4d40-bc8e-b49f714452b6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-dkdsv" podUID="bf6c12d4-a453-4d40-bc8e-b49f714452b6" Apr 13 20:41:44.138393 containerd[1598]: time="2026-04-13T20:41:44.138229507Z" level=error msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" failed" error="failed to destroy network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:41:44.139004 kubelet[2768]: E0413 20:41:44.138754 2768 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:44.139004 kubelet[2768]: E0413 20:41:44.138832 2768 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320"} Apr 13 20:41:44.139004 kubelet[2768]: E0413 20:41:44.138878 2768 
kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"28cbe2de-3916-40c2-b29d-4324dd024eb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:41:44.139690 kubelet[2768]: E0413 20:41:44.138921 2768 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"28cbe2de-3916-40c2-b29d-4324dd024eb0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4775f99-4x7xd" podUID="28cbe2de-3916-40c2-b29d-4324dd024eb0" Apr 13 20:41:44.149658 containerd[1598]: time="2026-04-13T20:41:44.149578121Z" level=info msg="StartContainer for \"198d405df0b78fab0d134337aea3f7fcc5ac2fd505702c9e1a3a0f332750bc2d\" returns successfully" Apr 13 20:41:44.254838 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4-shm.mount: Deactivated successfully. 
Apr 13 20:41:44.905146 containerd[1598]: time="2026-04-13T20:41:44.905084145Z" level=info msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" Apr 13 20:41:44.938821 kubelet[2768]: I0413 20:41:44.938549 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fcfm9" podStartSLOduration=4.535391801 podStartE2EDuration="21.938524117s" podCreationTimestamp="2026-04-13 20:41:23 +0000 UTC" firstStartedPulling="2026-04-13 20:41:23.638214911 +0000 UTC m=+23.250540391" lastFinishedPulling="2026-04-13 20:41:41.041347228 +0000 UTC m=+40.653672707" observedRunningTime="2026-04-13 20:41:44.934536482 +0000 UTC m=+44.546861984" watchObservedRunningTime="2026-04-13 20:41:44.938524117 +0000 UTC m=+44.550849619" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.985 [INFO][4027] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.985 [INFO][4027] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" iface="eth0" netns="/var/run/netns/cni-d8ca0ebb-671f-7d3e-e606-d3ae57cdf1d6" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.987 [INFO][4027] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" iface="eth0" netns="/var/run/netns/cni-d8ca0ebb-671f-7d3e-e606-d3ae57cdf1d6" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.988 [INFO][4027] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" iface="eth0" netns="/var/run/netns/cni-d8ca0ebb-671f-7d3e-e606-d3ae57cdf1d6" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.988 [INFO][4027] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:44.988 [INFO][4027] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.015 [INFO][4035] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.015 [INFO][4035] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.016 [INFO][4035] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.025 [WARNING][4035] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.025 [INFO][4035] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.027 [INFO][4035] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:45.032623 containerd[1598]: 2026-04-13 20:41:45.030 [INFO][4027] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:41:45.035840 containerd[1598]: time="2026-04-13T20:41:45.035158653Z" level=info msg="TearDown network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" successfully" Apr 13 20:41:45.035840 containerd[1598]: time="2026-04-13T20:41:45.035202145Z" level=info msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" returns successfully" Apr 13 20:41:45.041457 systemd[1]: run-netns-cni\x2dd8ca0ebb\x2d671f\x2d7d3e\x2de606\x2dd3ae57cdf1d6.mount: Deactivated successfully. 
Apr 13 20:41:45.170563 kubelet[2768]: I0413 20:41:45.169136 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-backend-key-pair\") pod \"0a0711eb-188e-496e-b764-dcc10a1782d1\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " Apr 13 20:41:45.170563 kubelet[2768]: I0413 20:41:45.169308 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-nginx-config\") pod \"0a0711eb-188e-496e-b764-dcc10a1782d1\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " Apr 13 20:41:45.170563 kubelet[2768]: I0413 20:41:45.169348 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j84vh\" (UniqueName: \"kubernetes.io/projected/0a0711eb-188e-496e-b764-dcc10a1782d1-kube-api-access-j84vh\") pod \"0a0711eb-188e-496e-b764-dcc10a1782d1\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " Apr 13 20:41:45.170563 kubelet[2768]: I0413 20:41:45.169388 2768 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-ca-bundle\") pod \"0a0711eb-188e-496e-b764-dcc10a1782d1\" (UID: \"0a0711eb-188e-496e-b764-dcc10a1782d1\") " Apr 13 20:41:45.170563 kubelet[2768]: I0413 20:41:45.169930 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0a0711eb-188e-496e-b764-dcc10a1782d1" (UID: "0a0711eb-188e-496e-b764-dcc10a1782d1"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:41:45.172526 kubelet[2768]: I0413 20:41:45.172485 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "0a0711eb-188e-496e-b764-dcc10a1782d1" (UID: "0a0711eb-188e-496e-b764-dcc10a1782d1"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:41:45.178086 kubelet[2768]: I0413 20:41:45.175952 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0711eb-188e-496e-b764-dcc10a1782d1-kube-api-access-j84vh" (OuterVolumeSpecName: "kube-api-access-j84vh") pod "0a0711eb-188e-496e-b764-dcc10a1782d1" (UID: "0a0711eb-188e-496e-b764-dcc10a1782d1"). InnerVolumeSpecName "kube-api-access-j84vh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:41:45.178086 kubelet[2768]: I0413 20:41:45.176648 2768 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0a0711eb-188e-496e-b764-dcc10a1782d1" (UID: "0a0711eb-188e-496e-b764-dcc10a1782d1"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:41:45.180382 systemd[1]: var-lib-kubelet-pods-0a0711eb\x2d188e\x2d496e\x2db764\x2ddcc10a1782d1-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:41:45.185169 systemd[1]: var-lib-kubelet-pods-0a0711eb\x2d188e\x2d496e\x2db764\x2ddcc10a1782d1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj84vh.mount: Deactivated successfully. 
Apr 13 20:41:45.270182 kubelet[2768]: I0413 20:41:45.270106 2768 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-nginx-config\") on node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:41:45.270182 kubelet[2768]: I0413 20:41:45.270165 2768 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j84vh\" (UniqueName: \"kubernetes.io/projected/0a0711eb-188e-496e-b764-dcc10a1782d1-kube-api-access-j84vh\") on node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:41:45.270182 kubelet[2768]: I0413 20:41:45.270184 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-ca-bundle\") on node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:41:45.270182 kubelet[2768]: I0413 20:41:45.270200 2768 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0a0711eb-188e-496e-b764-dcc10a1782d1-whisker-backend-key-pair\") on node \"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:41:46.186388 kubelet[2768]: I0413 20:41:46.186314 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ab5ea4f1-6294-4946-bd73-ebe62d18f6c9-whisker-backend-key-pair\") pod \"whisker-5d45cc4cfc-r7h7h\" (UID: \"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9\") " pod="calico-system/whisker-5d45cc4cfc-r7h7h" Apr 13 20:41:46.186388 kubelet[2768]: I0413 20:41:46.186391 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ab5ea4f1-6294-4946-bd73-ebe62d18f6c9-whisker-ca-bundle\") pod \"whisker-5d45cc4cfc-r7h7h\" (UID: \"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9\") " pod="calico-system/whisker-5d45cc4cfc-r7h7h" Apr 13 20:41:46.187150 kubelet[2768]: I0413 20:41:46.186427 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ab5ea4f1-6294-4946-bd73-ebe62d18f6c9-nginx-config\") pod \"whisker-5d45cc4cfc-r7h7h\" (UID: \"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9\") " pod="calico-system/whisker-5d45cc4cfc-r7h7h" Apr 13 20:41:46.187908 kubelet[2768]: I0413 20:41:46.187163 2768 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbnql\" (UniqueName: \"kubernetes.io/projected/ab5ea4f1-6294-4946-bd73-ebe62d18f6c9-kube-api-access-zbnql\") pod \"whisker-5d45cc4cfc-r7h7h\" (UID: \"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9\") " pod="calico-system/whisker-5d45cc4cfc-r7h7h" Apr 13 20:41:46.342660 containerd[1598]: time="2026-04-13T20:41:46.342157530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45cc4cfc-r7h7h,Uid:ab5ea4f1-6294-4946-bd73-ebe62d18f6c9,Namespace:calico-system,Attempt:0,}" Apr 13 20:41:46.354571 kernel: calico-node[4122]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:41:46.577039 kubelet[2768]: I0413 20:41:46.576905 2768 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0711eb-188e-496e-b764-dcc10a1782d1" path="/var/lib/kubelet/pods/0a0711eb-188e-496e-b764-dcc10a1782d1/volumes" Apr 13 20:41:46.594588 systemd-networkd[1218]: cali3f82632367a: Link UP Apr 13 20:41:46.597713 systemd-networkd[1218]: cali3f82632367a: Gained carrier Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.436 [INFO][4174] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0 whisker-5d45cc4cfc- calico-system ab5ea4f1-6294-4946-bd73-ebe62d18f6c9 948 0 2026-04-13 20:41:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5d45cc4cfc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal whisker-5d45cc4cfc-r7h7h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali3f82632367a [] [] }} ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.436 [INFO][4174] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.486 [INFO][4186] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" HandleID="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.504 [INFO][4186] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" HandleID="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" 
Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ef810), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"whisker-5d45cc4cfc-r7h7h", "timestamp":"2026-04-13 20:41:46.486868741 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000366f20)} Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.504 [INFO][4186] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.504 [INFO][4186] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.504 [INFO][4186] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.517 [INFO][4186] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.526 [INFO][4186] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.545 [INFO][4186] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.548 [INFO][4186] ipam/ipam.go 160: Attempting to load 
block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.551 [INFO][4186] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.551 [INFO][4186] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.553 [INFO][4186] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63 Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.560 [INFO][4186] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.568 [INFO][4186] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.65/26] block=192.168.16.64/26 handle="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.568 [INFO][4186] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.65/26] handle="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:46.626199 containerd[1598]: 2026-04-13 20:41:46.568 [INFO][4186] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.569 [INFO][4186] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.65/26] IPv6=[] ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" HandleID="k8s-pod-network.ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.571 [INFO][4174] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0", GenerateName:"whisker-5d45cc4cfc-", Namespace:"calico-system", SelfLink:"", UID:"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d45cc4cfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-5d45cc4cfc-r7h7h", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3f82632367a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.572 [INFO][4174] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.65/32] ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.572 [INFO][4174] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3f82632367a ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.597 [INFO][4174] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.628691 containerd[1598]: 2026-04-13 20:41:46.598 [INFO][4174] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0", GenerateName:"whisker-5d45cc4cfc-", Namespace:"calico-system", SelfLink:"", UID:"ab5ea4f1-6294-4946-bd73-ebe62d18f6c9", ResourceVersion:"948", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5d45cc4cfc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63", Pod:"whisker-5d45cc4cfc-r7h7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali3f82632367a", MAC:"8a:14:e4:47:13:91", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:46.630281 containerd[1598]: 2026-04-13 20:41:46.619 [INFO][4174] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63" Namespace="calico-system" Pod="whisker-5d45cc4cfc-r7h7h" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--5d45cc4cfc--r7h7h-eth0" Apr 13 20:41:46.660657 
containerd[1598]: time="2026-04-13T20:41:46.660492717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:46.661016 containerd[1598]: time="2026-04-13T20:41:46.660713856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:46.661016 containerd[1598]: time="2026-04-13T20:41:46.660745616Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:46.661016 containerd[1598]: time="2026-04-13T20:41:46.660905865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:46.793801 containerd[1598]: time="2026-04-13T20:41:46.793730128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5d45cc4cfc-r7h7h,Uid:ab5ea4f1-6294-4946-bd73-ebe62d18f6c9,Namespace:calico-system,Attempt:0,} returns sandbox id \"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63\"" Apr 13 20:41:46.796720 containerd[1598]: time="2026-04-13T20:41:46.796440709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:41:47.036724 kubelet[2768]: I0413 20:41:47.036657 2768 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:41:47.156270 systemd-networkd[1218]: vxlan.calico: Link UP Apr 13 20:41:47.156280 systemd-networkd[1218]: vxlan.calico: Gained carrier Apr 13 20:41:47.719260 systemd-networkd[1218]: cali3f82632367a: Gained IPv6LL Apr 13 20:41:47.977967 containerd[1598]: time="2026-04-13T20:41:47.977789273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:47.979820 containerd[1598]: time="2026-04-13T20:41:47.979511601Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:41:47.981079 containerd[1598]: time="2026-04-13T20:41:47.980980183Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:47.987746 containerd[1598]: time="2026-04-13T20:41:47.987696321Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:47.991824 containerd[1598]: time="2026-04-13T20:41:47.990768291Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.194271024s" Apr 13 20:41:47.991824 containerd[1598]: time="2026-04-13T20:41:47.990819797Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:41:48.000549 containerd[1598]: time="2026-04-13T20:41:48.000452610Z" level=info msg="CreateContainer within sandbox \"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:41:48.019712 containerd[1598]: time="2026-04-13T20:41:48.019646058Z" level=info msg="CreateContainer within sandbox \"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0101b391abc9b9be0c2ae09e46cf3fc816887c57dd0e579a3e8e900166771557\"" Apr 13 20:41:48.020426 containerd[1598]: time="2026-04-13T20:41:48.020331954Z" level=info 
msg="StartContainer for \"0101b391abc9b9be0c2ae09e46cf3fc816887c57dd0e579a3e8e900166771557\"" Apr 13 20:41:48.078077 systemd[1]: run-containerd-runc-k8s.io-0101b391abc9b9be0c2ae09e46cf3fc816887c57dd0e579a3e8e900166771557-runc.XLENMV.mount: Deactivated successfully. Apr 13 20:41:48.140139 containerd[1598]: time="2026-04-13T20:41:48.139876098Z" level=info msg="StartContainer for \"0101b391abc9b9be0c2ae09e46cf3fc816887c57dd0e579a3e8e900166771557\" returns successfully" Apr 13 20:41:48.145589 containerd[1598]: time="2026-04-13T20:41:48.145532735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:41:49.062967 systemd-networkd[1218]: vxlan.calico: Gained IPv6LL Apr 13 20:41:49.744575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4028077443.mount: Deactivated successfully. Apr 13 20:41:49.762359 containerd[1598]: time="2026-04-13T20:41:49.762304653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:49.763754 containerd[1598]: time="2026-04-13T20:41:49.763684834Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:41:49.765093 containerd[1598]: time="2026-04-13T20:41:49.764675677Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:49.767636 containerd[1598]: time="2026-04-13T20:41:49.767574070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:41:49.768832 containerd[1598]: time="2026-04-13T20:41:49.768650674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id 
\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.623059354s" Apr 13 20:41:49.768832 containerd[1598]: time="2026-04-13T20:41:49.768698465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:41:49.774693 containerd[1598]: time="2026-04-13T20:41:49.774651176Z" level=info msg="CreateContainer within sandbox \"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:41:49.789489 containerd[1598]: time="2026-04-13T20:41:49.789277544Z" level=info msg="CreateContainer within sandbox \"ce8ece43965b4084d35e6cf24fc9508c7d48382b1748a2324effdb8f8ae8bc63\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"00b0694ee68b3940f4a27a26d495bf2d9aa77173949538976a46b2605cb24ddd\"" Apr 13 20:41:49.793313 containerd[1598]: time="2026-04-13T20:41:49.793274892Z" level=info msg="StartContainer for \"00b0694ee68b3940f4a27a26d495bf2d9aa77173949538976a46b2605cb24ddd\"" Apr 13 20:41:49.888709 containerd[1598]: time="2026-04-13T20:41:49.888558499Z" level=info msg="StartContainer for \"00b0694ee68b3940f4a27a26d495bf2d9aa77173949538976a46b2605cb24ddd\" returns successfully" Apr 13 20:41:49.942961 kubelet[2768]: I0413 20:41:49.942888 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-5d45cc4cfc-r7h7h" podStartSLOduration=0.968653698 podStartE2EDuration="3.942864904s" podCreationTimestamp="2026-04-13 20:41:46 +0000 UTC" firstStartedPulling="2026-04-13 20:41:46.795819886 +0000 UTC m=+46.408145373" lastFinishedPulling="2026-04-13 20:41:49.770031089 +0000 UTC 
m=+49.382356579" observedRunningTime="2026-04-13 20:41:49.941706547 +0000 UTC m=+49.554032049" watchObservedRunningTime="2026-04-13 20:41:49.942864904 +0000 UTC m=+49.555190407" Apr 13 20:41:51.747431 ntpd[1544]: Listen normally on 6 vxlan.calico 192.168.16.64:123 Apr 13 20:41:51.747571 ntpd[1544]: Listen normally on 7 cali3f82632367a [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:41:51.748121 ntpd[1544]: 13 Apr 20:41:51 ntpd[1544]: Listen normally on 6 vxlan.calico 192.168.16.64:123 Apr 13 20:41:51.748121 ntpd[1544]: 13 Apr 20:41:51 ntpd[1544]: Listen normally on 7 cali3f82632367a [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:41:51.748121 ntpd[1544]: 13 Apr 20:41:51 ntpd[1544]: Listen normally on 8 vxlan.calico [fe80::6474:58ff:fe41:23d6%5]:123 Apr 13 20:41:51.747656 ntpd[1544]: Listen normally on 8 vxlan.calico [fe80::6474:58ff:fe41:23d6%5]:123 Apr 13 20:41:54.574579 containerd[1598]: time="2026-04-13T20:41:54.574512141Z" level=info msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.643 [INFO][4484] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.645 [INFO][4484] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" iface="eth0" netns="/var/run/netns/cni-59b00bf8-18ac-9c1f-5023-c21bfe2cf3aa" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.645 [INFO][4484] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" iface="eth0" netns="/var/run/netns/cni-59b00bf8-18ac-9c1f-5023-c21bfe2cf3aa" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.647 [INFO][4484] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" iface="eth0" netns="/var/run/netns/cni-59b00bf8-18ac-9c1f-5023-c21bfe2cf3aa" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.647 [INFO][4484] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.647 [INFO][4484] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.676 [INFO][4491] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.676 [INFO][4491] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.676 [INFO][4491] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.686 [WARNING][4491] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.686 [INFO][4491] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.688 [INFO][4491] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:54.693101 containerd[1598]: 2026-04-13 20:41:54.689 [INFO][4484] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:41:54.693826 containerd[1598]: time="2026-04-13T20:41:54.693036514Z" level=info msg="TearDown network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" successfully" Apr 13 20:41:54.693826 containerd[1598]: time="2026-04-13T20:41:54.693156061Z" level=info msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" returns successfully" Apr 13 20:41:54.695445 containerd[1598]: time="2026-04-13T20:41:54.694017165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b544dcfd-44f62,Uid:c7953eac-aed1-4972-966d-335bb475a17a,Namespace:calico-system,Attempt:1,}" Apr 13 20:41:54.700897 systemd[1]: run-netns-cni\x2d59b00bf8\x2d18ac\x2d9c1f\x2d5023\x2dc21bfe2cf3aa.mount: Deactivated successfully. 
Apr 13 20:41:54.856757 systemd-networkd[1218]: calib77da2d0ad7: Link UP Apr 13 20:41:54.859210 systemd-networkd[1218]: calib77da2d0ad7: Gained carrier Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.768 [INFO][4497] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0 calico-kube-controllers-67b544dcfd- calico-system c7953eac-aed1-4972-966d-335bb475a17a 990 0 2026-04-13 20:41:23 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67b544dcfd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal calico-kube-controllers-67b544dcfd-44f62 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib77da2d0ad7 [] [] }} ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.768 [INFO][4497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.803 [INFO][4509] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" HandleID="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.814 [INFO][4509] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" HandleID="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000380140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"calico-kube-controllers-67b544dcfd-44f62", "timestamp":"2026-04-13 20:41:54.803559564 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0004aa000)} Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.814 [INFO][4509] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.814 [INFO][4509] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.814 [INFO][4509] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.817 [INFO][4509] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.823 [INFO][4509] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.828 [INFO][4509] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.831 [INFO][4509] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.835 [INFO][4509] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.835 [INFO][4509] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.837 [INFO][4509] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.843 [INFO][4509] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.16.64/26 handle="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.891144 containerd[1598]: 2026-04-13 20:41:54.850 [INFO][4509] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.66/26] block=192.168.16.64/26 handle="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.850 [INFO][4509] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.66/26] handle="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.850 [INFO][4509] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.850 [INFO][4509] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.66/26] IPv6=[] ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" HandleID="k8s-pod-network.653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.852 [INFO][4497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0", GenerateName:"calico-kube-controllers-67b544dcfd-", Namespace:"calico-system", SelfLink:"", UID:"c7953eac-aed1-4972-966d-335bb475a17a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b544dcfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-67b544dcfd-44f62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib77da2d0ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.852 [INFO][4497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.66/32] ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 
20:41:54.852 [INFO][4497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib77da2d0ad7 ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.892196 containerd[1598]: 2026-04-13 20:41:54.860 [INFO][4497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.892620 containerd[1598]: 2026-04-13 20:41:54.862 [INFO][4497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0", GenerateName:"calico-kube-controllers-67b544dcfd-", Namespace:"calico-system", SelfLink:"", UID:"c7953eac-aed1-4972-966d-335bb475a17a", ResourceVersion:"990", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"67b544dcfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a", Pod:"calico-kube-controllers-67b544dcfd-44f62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib77da2d0ad7", MAC:"6a:86:0f:3a:c1:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:54.892620 containerd[1598]: 2026-04-13 20:41:54.883 [INFO][4497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a" Namespace="calico-system" Pod="calico-kube-controllers-67b544dcfd-44f62" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:41:54.947819 containerd[1598]: time="2026-04-13T20:41:54.947681927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:54.947819 containerd[1598]: time="2026-04-13T20:41:54.947773322Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:54.947819 containerd[1598]: time="2026-04-13T20:41:54.947792197Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:54.948443 containerd[1598]: time="2026-04-13T20:41:54.948291143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:55.059572 containerd[1598]: time="2026-04-13T20:41:55.059439253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67b544dcfd-44f62,Uid:c7953eac-aed1-4972-966d-335bb475a17a,Namespace:calico-system,Attempt:1,} returns sandbox id \"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a\"" Apr 13 20:41:55.063107 containerd[1598]: time="2026-04-13T20:41:55.061858650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:41:56.168197 systemd-networkd[1218]: calib77da2d0ad7: Gained IPv6LL Apr 13 20:41:56.577747 containerd[1598]: time="2026-04-13T20:41:56.577700989Z" level=info msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" Apr 13 20:41:56.580462 containerd[1598]: time="2026-04-13T20:41:56.579918089Z" level=info msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.718 [INFO][4605] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.718 [INFO][4605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" iface="eth0" netns="/var/run/netns/cni-00c294e8-5a4f-d169-b4de-bed38ee5a3d7" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.720 [INFO][4605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" iface="eth0" netns="/var/run/netns/cni-00c294e8-5a4f-d169-b4de-bed38ee5a3d7" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.722 [INFO][4605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" iface="eth0" netns="/var/run/netns/cni-00c294e8-5a4f-d169-b4de-bed38ee5a3d7" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.722 [INFO][4605] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.723 [INFO][4605] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.818 [INFO][4618] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.818 [INFO][4618] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.819 [INFO][4618] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.834 [WARNING][4618] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.834 [INFO][4618] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.837 [INFO][4618] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:56.842980 containerd[1598]: 2026-04-13 20:41:56.840 [INFO][4605] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:41:56.846504 containerd[1598]: time="2026-04-13T20:41:56.843942423Z" level=info msg="TearDown network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" successfully" Apr 13 20:41:56.846504 containerd[1598]: time="2026-04-13T20:41:56.843984407Z" level=info msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" returns successfully" Apr 13 20:41:56.852192 containerd[1598]: time="2026-04-13T20:41:56.851716187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dkdsv,Uid:bf6c12d4-a453-4d40-bc8e-b49f714452b6,Namespace:kube-system,Attempt:1,}" Apr 13 20:41:56.854711 systemd[1]: run-netns-cni\x2d00c294e8\x2d5a4f\x2dd169\x2db4de\x2dbed38ee5a3d7.mount: Deactivated successfully. 
Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.720 [INFO][4604] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.721 [INFO][4604] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" iface="eth0" netns="/var/run/netns/cni-62e8bc45-64c4-c4c8-b270-f3b25839a424" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.722 [INFO][4604] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" iface="eth0" netns="/var/run/netns/cni-62e8bc45-64c4-c4c8-b270-f3b25839a424" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.722 [INFO][4604] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" iface="eth0" netns="/var/run/netns/cni-62e8bc45-64c4-c4c8-b270-f3b25839a424" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.722 [INFO][4604] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.723 [INFO][4604] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.830 [INFO][4619] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:56.865444 containerd[1598]: 
2026-04-13 20:41:56.831 [INFO][4619] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.836 [INFO][4619] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.854 [WARNING][4619] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.855 [INFO][4619] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.859 [INFO][4619] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:56.865444 containerd[1598]: 2026-04-13 20:41:56.862 [INFO][4604] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:41:56.869459 containerd[1598]: time="2026-04-13T20:41:56.868799002Z" level=info msg="TearDown network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" successfully" Apr 13 20:41:56.869459 containerd[1598]: time="2026-04-13T20:41:56.868839472Z" level=info msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" returns successfully" Apr 13 20:41:56.870577 containerd[1598]: time="2026-04-13T20:41:56.870229774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2m2c2,Uid:9f5a2099-cb50-4d52-9877-f1dd83710551,Namespace:kube-system,Attempt:1,}" Apr 13 20:41:56.876978 systemd[1]: run-netns-cni\x2d62e8bc45\x2d64c4\x2dc4c8\x2db270\x2df3b25839a424.mount: Deactivated successfully. Apr 13 20:41:57.227841 systemd-networkd[1218]: cali8964bfe202e: Link UP Apr 13 20:41:57.231159 systemd-networkd[1218]: cali8964bfe202e: Gained carrier Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.027 [INFO][4631] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0 coredns-674b8bbfcf- kube-system bf6c12d4-a453-4d40-bc8e-b49f714452b6 1000 0 2026-04-13 20:41:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal coredns-674b8bbfcf-dkdsv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8964bfe202e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" 
WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.027 [INFO][4631] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.137 [INFO][4654] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" HandleID="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.155 [INFO][4654] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" HandleID="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00039c9b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-dkdsv", "timestamp":"2026-04-13 20:41:57.136056535 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002f6580)} Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 
20:41:57.156 [INFO][4654] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.156 [INFO][4654] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.156 [INFO][4654] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.161 [INFO][4654] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.170 [INFO][4654] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.182 [INFO][4654] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.186 [INFO][4654] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.190 [INFO][4654] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.190 [INFO][4654] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.193 [INFO][4654] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7 Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.199 [INFO][4654] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.209 [INFO][4654] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.67/26] block=192.168.16.64/26 handle="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.210 [INFO][4654] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.67/26] handle="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.258728 containerd[1598]: 2026-04-13 20:41:57.211 [INFO][4654] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:41:57.263774 containerd[1598]: 2026-04-13 20:41:57.212 [INFO][4654] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.67/26] IPv6=[] ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" HandleID="k8s-pod-network.60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.263774 containerd[1598]: 2026-04-13 20:41:57.217 [INFO][4631] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf6c12d4-a453-4d40-bc8e-b49f714452b6", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-dkdsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8964bfe202e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:57.263774 containerd[1598]: 2026-04-13 20:41:57.217 [INFO][4631] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.67/32] ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.263774 containerd[1598]: 2026-04-13 20:41:57.218 [INFO][4631] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8964bfe202e ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.263774 containerd[1598]: 2026-04-13 20:41:57.230 [INFO][4631] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.265711 containerd[1598]: 2026-04-13 20:41:57.233 [INFO][4631] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf6c12d4-a453-4d40-bc8e-b49f714452b6", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7", Pod:"coredns-674b8bbfcf-dkdsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8964bfe202e", MAC:"56:15:17:b7:11:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:57.265711 containerd[1598]: 2026-04-13 20:41:57.255 [INFO][4631] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7" Namespace="kube-system" Pod="coredns-674b8bbfcf-dkdsv" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:41:57.340508 systemd-networkd[1218]: cali7651e3b8ff3: Link UP Apr 13 20:41:57.345207 systemd-networkd[1218]: cali7651e3b8ff3: Gained carrier Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.048 [INFO][4640] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0 coredns-674b8bbfcf- kube-system 9f5a2099-cb50-4d52-9877-f1dd83710551 1001 0 2026-04-13 20:41:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal coredns-674b8bbfcf-2m2c2 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7651e3b8ff3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.048 [INFO][4640] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.142 [INFO][4659] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" HandleID="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.162 [INFO][4659] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" HandleID="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004f7900), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"coredns-674b8bbfcf-2m2c2", "timestamp":"2026-04-13 20:41:57.142299523 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000f6c60)} Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.162 [INFO][4659] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.211 [INFO][4659] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.211 [INFO][4659] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.266 [INFO][4659] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.276 [INFO][4659] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.290 [INFO][4659] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.294 [INFO][4659] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.299 [INFO][4659] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.299 [INFO][4659] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.302 [INFO][4659] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.310 [INFO][4659] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.16.64/26 handle="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.326 [INFO][4659] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.68/26] block=192.168.16.64/26 handle="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.327 [INFO][4659] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.68/26] handle="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:57.388413 containerd[1598]: 2026-04-13 20:41:57.328 [INFO][4659] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:57.391502 containerd[1598]: 2026-04-13 20:41:57.328 [INFO][4659] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.68/26] IPv6=[] ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" HandleID="k8s-pod-network.eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.391502 containerd[1598]: 2026-04-13 20:41:57.334 [INFO][4640] cni-plugin/k8s.go 418: Populated endpoint ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f5a2099-cb50-4d52-9877-f1dd83710551", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-674b8bbfcf-2m2c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7651e3b8ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:57.391502 containerd[1598]: 2026-04-13 20:41:57.334 [INFO][4640] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.68/32] ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.391502 containerd[1598]: 2026-04-13 20:41:57.334 [INFO][4640] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7651e3b8ff3 ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.391502 containerd[1598]: 2026-04-13 20:41:57.339 [INFO][4640] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.392753 containerd[1598]: 2026-04-13 20:41:57.339 [INFO][4640] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f5a2099-cb50-4d52-9877-f1dd83710551", ResourceVersion:"1001", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d", Pod:"coredns-674b8bbfcf-2m2c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7651e3b8ff3", MAC:"12:20:e3:95:e6:3a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:57.392753 containerd[1598]: 2026-04-13 20:41:57.358 [INFO][4640] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d" Namespace="kube-system" Pod="coredns-674b8bbfcf-2m2c2" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:41:57.400345 containerd[1598]: time="2026-04-13T20:41:57.398741027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:57.401491 containerd[1598]: time="2026-04-13T20:41:57.399438861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:57.401491 containerd[1598]: time="2026-04-13T20:41:57.399586353Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:57.401491 containerd[1598]: time="2026-04-13T20:41:57.401190401Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:57.497100 containerd[1598]: time="2026-04-13T20:41:57.492688633Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:57.497100 containerd[1598]: time="2026-04-13T20:41:57.492769670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:57.497100 containerd[1598]: time="2026-04-13T20:41:57.492797702Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:57.497100 containerd[1598]: time="2026-04-13T20:41:57.492919911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:57.574620 containerd[1598]: time="2026-04-13T20:41:57.574571465Z" level=info msg="StopPodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" Apr 13 20:41:57.586454 containerd[1598]: time="2026-04-13T20:41:57.586385506Z" level=info msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" Apr 13 20:41:57.656090 containerd[1598]: time="2026-04-13T20:41:57.655635276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-dkdsv,Uid:bf6c12d4-a453-4d40-bc8e-b49f714452b6,Namespace:kube-system,Attempt:1,} returns sandbox id \"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7\"" Apr 13 20:41:57.686630 containerd[1598]: time="2026-04-13T20:41:57.686232638Z" level=info msg="CreateContainer within sandbox \"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:41:57.699038 containerd[1598]: time="2026-04-13T20:41:57.697677760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-2m2c2,Uid:9f5a2099-cb50-4d52-9877-f1dd83710551,Namespace:kube-system,Attempt:1,} returns sandbox id \"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d\"" Apr 13 20:41:57.727491 containerd[1598]: time="2026-04-13T20:41:57.726868436Z" level=info msg="CreateContainer within sandbox \"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:41:57.765254 containerd[1598]: time="2026-04-13T20:41:57.765117632Z" level=info msg="CreateContainer within sandbox \"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61a3a8181c0f54ea257fd9f722171189334f5c2f131962bb19f82243c7ae2a27\"" Apr 13 20:41:57.768860 containerd[1598]: 
time="2026-04-13T20:41:57.767874885Z" level=info msg="StartContainer for \"61a3a8181c0f54ea257fd9f722171189334f5c2f131962bb19f82243c7ae2a27\"" Apr 13 20:41:57.770805 containerd[1598]: time="2026-04-13T20:41:57.770719150Z" level=info msg="CreateContainer within sandbox \"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c696a737d55cedcbe56091f4b2e4d5d0e9f808d780b50c71ed541970f9dd052d\"" Apr 13 20:41:57.775782 containerd[1598]: time="2026-04-13T20:41:57.775746005Z" level=info msg="StartContainer for \"c696a737d55cedcbe56091f4b2e4d5d0e9f808d780b50c71ed541970f9dd052d\"" Apr 13 20:41:58.020382 containerd[1598]: time="2026-04-13T20:41:58.019988242Z" level=info msg="StartContainer for \"c696a737d55cedcbe56091f4b2e4d5d0e9f808d780b50c71ed541970f9dd052d\" returns successfully" Apr 13 20:41:58.042040 containerd[1598]: time="2026-04-13T20:41:58.041911676Z" level=info msg="StartContainer for \"61a3a8181c0f54ea257fd9f722171189334f5c2f131962bb19f82243c7ae2a27\" returns successfully" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" iface="eth0" netns="/var/run/netns/cni-2a4310d5-9f09-3b72-ff8d-bec24f6436eb" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" iface="eth0" netns="/var/run/netns/cni-2a4310d5-9f09-3b72-ff8d-bec24f6436eb" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" iface="eth0" netns="/var/run/netns/cni-2a4310d5-9f09-3b72-ff8d-bec24f6436eb" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:57.931 [INFO][4798] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.151 [INFO][4871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.151 [INFO][4871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.151 [INFO][4871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.172 [WARNING][4871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.172 [INFO][4871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.175 [INFO][4871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:58.186774 containerd[1598]: 2026-04-13 20:41:58.180 [INFO][4798] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:41:58.192677 containerd[1598]: time="2026-04-13T20:41:58.186895931Z" level=info msg="TearDown network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" successfully" Apr 13 20:41:58.192677 containerd[1598]: time="2026-04-13T20:41:58.186933908Z" level=info msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" returns successfully" Apr 13 20:41:58.192677 containerd[1598]: time="2026-04-13T20:41:58.188038466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-cfwgd,Uid:80bc0439-cac3-4b71-ae31-e9556293dc74,Namespace:calico-system,Attempt:1,}" Apr 13 20:41:58.196129 systemd[1]: run-netns-cni\x2d2a4310d5\x2d9f09\x2d3b72\x2dff8d\x2dbec24f6436eb.mount: Deactivated successfully. 
Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.995 [INFO][4802] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.996 [INFO][4802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" iface="eth0" netns="/var/run/netns/cni-35cf94c7-c973-1e20-604c-ba1e07bca77b" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.998 [INFO][4802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" iface="eth0" netns="/var/run/netns/cni-35cf94c7-c973-1e20-604c-ba1e07bca77b" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.998 [INFO][4802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" iface="eth0" netns="/var/run/netns/cni-35cf94c7-c973-1e20-604c-ba1e07bca77b" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.998 [INFO][4802] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:57.998 [INFO][4802] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.207 [INFO][4885] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.246492 containerd[1598]: 
2026-04-13 20:41:58.208 [INFO][4885] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.208 [INFO][4885] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.223 [WARNING][4885] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.224 [INFO][4885] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.233 [INFO][4885] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:58.246492 containerd[1598]: 2026-04-13 20:41:58.238 [INFO][4802] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:41:58.248411 containerd[1598]: time="2026-04-13T20:41:58.248307472Z" level=info msg="TearDown network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" successfully" Apr 13 20:41:58.248555 containerd[1598]: time="2026-04-13T20:41:58.248533085Z" level=info msg="StopPodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" returns successfully" Apr 13 20:41:58.250049 containerd[1598]: time="2026-04-13T20:41:58.249534316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-qch75,Uid:2e0a0dff-fcd2-4863-8bb8-041686ac070a,Namespace:calico-system,Attempt:1,}" Apr 13 20:41:58.526355 systemd-networkd[1218]: calif90f2ede309: Link UP Apr 13 20:41:58.532920 systemd-networkd[1218]: calif90f2ede309: Gained carrier Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.332 [INFO][4915] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0 calico-apiserver-7d4775f99- calico-system 80bc0439-cac3-4b71-ae31-e9556293dc74 1016 0 2026-04-13 20:41:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4775f99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal calico-apiserver-7d4775f99-cfwgd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calif90f2ede309 [] [] }} ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" 
WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.333 [INFO][4915] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.414 [INFO][4938] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" HandleID="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.438 [INFO][4938] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" HandleID="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000123e90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"calico-apiserver-7d4775f99-cfwgd", "timestamp":"2026-04-13 20:41:58.414571406 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0000fc000)} Apr 
13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.438 [INFO][4938] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.438 [INFO][4938] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.438 [INFO][4938] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.446 [INFO][4938] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.456 [INFO][4938] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.466 [INFO][4938] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.470 [INFO][4938] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.476 [INFO][4938] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.477 [INFO][4938] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.481 
[INFO][4938] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.489 [INFO][4938] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.16.64/26 handle="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.504 [INFO][4938] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.69/26] block=192.168.16.64/26 handle="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.570537 containerd[1598]: 2026-04-13 20:41:58.504 [INFO][4938] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.69/26] handle="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.505 [INFO][4938] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.505 [INFO][4938] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.69/26] IPv6=[] ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" HandleID="k8s-pod-network.d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.508 [INFO][4915] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"80bc0439-cac3-4b71-ae31-e9556293dc74", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-7d4775f99-cfwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif90f2ede309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.508 [INFO][4915] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.69/32] ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.508 [INFO][4915] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif90f2ede309 ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.571614 containerd[1598]: 2026-04-13 20:41:58.526 [INFO][4915] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.571982 containerd[1598]: 2026-04-13 20:41:58.527 [INFO][4915] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" 
Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"80bc0439-cac3-4b71-ae31-e9556293dc74", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb", Pod:"calico-apiserver-7d4775f99-cfwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif90f2ede309", MAC:"6e:9a:69:d0:44:7c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:58.571982 containerd[1598]: 2026-04-13 20:41:58.550 [INFO][4915] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-cfwgd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:41:58.583593 containerd[1598]: time="2026-04-13T20:41:58.582824468Z" level=info msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" Apr 13 20:41:58.659298 systemd-networkd[1218]: cali1dea5c6c271: Link UP Apr 13 20:41:58.659664 systemd-networkd[1218]: cali1dea5c6c271: Gained carrier Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.378 [INFO][4925] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0 goldmane-5b85766d88- calico-system 2e0a0dff-fcd2-4863-8bb8-041686ac070a 1017 0 2026-04-13 20:41:22 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal goldmane-5b85766d88-qch75 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1dea5c6c271 [] [] }} ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.378 [INFO][4925] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" 
WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.494 [INFO][4945] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" HandleID="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.533 [INFO][4945] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" HandleID="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fec0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"goldmane-5b85766d88-qch75", "timestamp":"2026-04-13 20:41:58.49488943 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000261760)} Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.533 [INFO][4945] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.533 [INFO][4945] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.533 [INFO][4945] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.553 [INFO][4945] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.569 [INFO][4945] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.587 [INFO][4945] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.592 [INFO][4945] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.602 [INFO][4945] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.602 [INFO][4945] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.605 [INFO][4945] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.616 [INFO][4945] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.16.64/26 handle="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.628 [INFO][4945] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.70/26] block=192.168.16.64/26 handle="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.628 [INFO][4945] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.70/26] handle="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:58.725145 containerd[1598]: 2026-04-13 20:41:58.628 [INFO][4945] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.631 [INFO][4945] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.70/26] IPv6=[] ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" HandleID="k8s-pod-network.e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.646 [INFO][4925] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2e0a0dff-fcd2-4863-8bb8-041686ac070a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-5b85766d88-qch75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dea5c6c271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.647 [INFO][4925] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.70/32] ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.647 [INFO][4925] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1dea5c6c271 
ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.661 [INFO][4925] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.729576 containerd[1598]: 2026-04-13 20:41:58.678 [INFO][4925] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2e0a0dff-fcd2-4863-8bb8-041686ac070a", ResourceVersion:"1017", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d", Pod:"goldmane-5b85766d88-qch75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dea5c6c271", MAC:"8e:7a:1f:c6:d3:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:58.731799 containerd[1598]: 2026-04-13 20:41:58.706 [INFO][4925] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d" Namespace="calico-system" Pod="goldmane-5b85766d88-qch75" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:41:58.734719 containerd[1598]: time="2026-04-13T20:41:58.734108044Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:58.734719 containerd[1598]: time="2026-04-13T20:41:58.734209415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:58.734719 containerd[1598]: time="2026-04-13T20:41:58.734241606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:58.734719 containerd[1598]: time="2026-04-13T20:41:58.734406496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:58.865396 systemd[1]: run-netns-cni\x2d35cf94c7\x2dc973\x2d1e20\x2d604c\x2dba1e07bca77b.mount: Deactivated successfully. Apr 13 20:41:58.923137 containerd[1598]: time="2026-04-13T20:41:58.921394481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:58.923137 containerd[1598]: time="2026-04-13T20:41:58.921472202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:58.923137 containerd[1598]: time="2026-04-13T20:41:58.921492383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:58.923137 containerd[1598]: time="2026-04-13T20:41:58.921645862Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:59.029602 systemd[1]: run-containerd-runc-k8s.io-e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d-runc.YQVaip.mount: Deactivated successfully. Apr 13 20:41:59.047036 systemd-networkd[1218]: cali7651e3b8ff3: Gained IPv6LL Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.803 [INFO][4977] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.803 [INFO][4977] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" iface="eth0" netns="/var/run/netns/cni-1e476972-9bd7-417e-4101-05d28f9c28db" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.804 [INFO][4977] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" iface="eth0" netns="/var/run/netns/cni-1e476972-9bd7-417e-4101-05d28f9c28db" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.804 [INFO][4977] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" iface="eth0" netns="/var/run/netns/cni-1e476972-9bd7-417e-4101-05d28f9c28db" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.804 [INFO][4977] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:58.804 [INFO][4977] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.032 [INFO][5025] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.043 [INFO][5025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.044 [INFO][5025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.070 [WARNING][5025] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.070 [INFO][5025] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.080 [INFO][5025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:59.131126 containerd[1598]: 2026-04-13 20:41:59.109 [INFO][4977] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:41:59.131126 containerd[1598]: time="2026-04-13T20:41:59.128975868Z" level=info msg="TearDown network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" successfully" Apr 13 20:41:59.131126 containerd[1598]: time="2026-04-13T20:41:59.129014283Z" level=info msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" returns successfully" Apr 13 20:41:59.141684 systemd[1]: run-netns-cni\x2d1e476972\x2d9bd7\x2d417e\x2d4101\x2d05d28f9c28db.mount: Deactivated successfully. 
Apr 13 20:41:59.150288 containerd[1598]: time="2026-04-13T20:41:59.147260325Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzf9p,Uid:0a802a66-82ba-4481-9d13-dc399ccc739d,Namespace:calico-system,Attempt:1,}" Apr 13 20:41:59.154839 kubelet[2768]: I0413 20:41:59.154332 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-dkdsv" podStartSLOduration=51.154304914 podStartE2EDuration="51.154304914s" podCreationTimestamp="2026-04-13 20:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:59.104908446 +0000 UTC m=+58.717233948" watchObservedRunningTime="2026-04-13 20:41:59.154304914 +0000 UTC m=+58.766630417" Apr 13 20:41:59.161250 containerd[1598]: time="2026-04-13T20:41:59.160371392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-cfwgd,Uid:80bc0439-cac3-4b71-ae31-e9556293dc74,Namespace:calico-system,Attempt:1,} returns sandbox id \"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb\"" Apr 13 20:41:59.175574 systemd-networkd[1218]: cali8964bfe202e: Gained IPv6LL Apr 13 20:41:59.189248 kubelet[2768]: I0413 20:41:59.187588 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-2m2c2" podStartSLOduration=51.187557648 podStartE2EDuration="51.187557648s" podCreationTimestamp="2026-04-13 20:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:41:59.182631534 +0000 UTC m=+58.794957036" watchObservedRunningTime="2026-04-13 20:41:59.187557648 +0000 UTC m=+58.799883149" Apr 13 20:41:59.300328 containerd[1598]: time="2026-04-13T20:41:59.300280991Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-5b85766d88-qch75,Uid:2e0a0dff-fcd2-4863-8bb8-041686ac070a,Namespace:calico-system,Attempt:1,} returns sandbox id \"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d\"" Apr 13 20:41:59.520982 systemd-networkd[1218]: cali00e5025f0fe: Link UP Apr 13 20:41:59.522915 systemd-networkd[1218]: cali00e5025f0fe: Gained carrier Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.377 [INFO][5090] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0 csi-node-driver- calico-system 0a802a66-82ba-4481-9d13-dc399ccc739d 1029 0 2026-04-13 20:41:23 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal csi-node-driver-wzf9p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali00e5025f0fe [] [] }} ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.377 [INFO][5090] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.431 [INFO][5117] ipam/ipam_plugin.go 235: Calico CNI IPAM request 
count IPv4=1 IPv6=0 ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" HandleID="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.451 [INFO][5117] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" HandleID="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e1d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"csi-node-driver-wzf9p", "timestamp":"2026-04-13 20:41:59.431601636 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000186840)} Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.452 [INFO][5117] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.452 [INFO][5117] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.452 [INFO][5117] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.456 [INFO][5117] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.464 [INFO][5117] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.472 [INFO][5117] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.475 [INFO][5117] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.480 [INFO][5117] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.480 [INFO][5117] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.483 [INFO][5117] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705 Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.494 [INFO][5117] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.16.64/26 handle="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.509 [INFO][5117] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.71/26] block=192.168.16.64/26 handle="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.510 [INFO][5117] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.71/26] handle="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:41:59.569447 containerd[1598]: 2026-04-13 20:41:59.510 [INFO][5117] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.510 [INFO][5117] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.71/26] IPv6=[] ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" HandleID="k8s-pod-network.740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.514 [INFO][5090] cni-plugin/k8s.go 418: Populated endpoint ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a802a66-82ba-4481-9d13-dc399ccc739d", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-wzf9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00e5025f0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.514 [INFO][5090] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.71/32] ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.514 [INFO][5090] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali00e5025f0fe ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" 
Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.524 [INFO][5090] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.570654 containerd[1598]: 2026-04-13 20:41:59.544 [INFO][5090] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a802a66-82ba-4481-9d13-dc399ccc739d", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705", Pod:"csi-node-driver-wzf9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00e5025f0fe", MAC:"f6:3a:1e:8d:07:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:41:59.573371 containerd[1598]: 2026-04-13 20:41:59.565 [INFO][5090] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705" Namespace="calico-system" Pod="csi-node-driver-wzf9p" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:41:59.579137 containerd[1598]: time="2026-04-13T20:41:59.579018085Z" level=info msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" Apr 13 20:41:59.686313 containerd[1598]: time="2026-04-13T20:41:59.685552914Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:41:59.686313 containerd[1598]: time="2026-04-13T20:41:59.685633644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:41:59.686313 containerd[1598]: time="2026-04-13T20:41:59.685654027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:59.686313 containerd[1598]: time="2026-04-13T20:41:59.685812509Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:41:59.844557 containerd[1598]: time="2026-04-13T20:41:59.844165141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wzf9p,Uid:0a802a66-82ba-4481-9d13-dc399ccc739d,Namespace:calico-system,Attempt:1,} returns sandbox id \"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705\"" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.795 [INFO][5144] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.796 [INFO][5144] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" iface="eth0" netns="/var/run/netns/cni-963c545d-1f92-03c5-2d62-276b3907dbbf" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.796 [INFO][5144] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" iface="eth0" netns="/var/run/netns/cni-963c545d-1f92-03c5-2d62-276b3907dbbf" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.796 [INFO][5144] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" iface="eth0" netns="/var/run/netns/cni-963c545d-1f92-03c5-2d62-276b3907dbbf" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.797 [INFO][5144] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.797 [INFO][5144] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.887 [INFO][5192] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.888 [INFO][5192] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.888 [INFO][5192] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.899 [WARNING][5192] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.899 [INFO][5192] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.902 [INFO][5192] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:41:59.912621 containerd[1598]: 2026-04-13 20:41:59.905 [INFO][5144] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:41:59.912621 containerd[1598]: time="2026-04-13T20:41:59.910942083Z" level=info msg="TearDown network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" successfully" Apr 13 20:41:59.912621 containerd[1598]: time="2026-04-13T20:41:59.911002637Z" level=info msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" returns successfully" Apr 13 20:41:59.917386 containerd[1598]: time="2026-04-13T20:41:59.913659132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-4x7xd,Uid:28cbe2de-3916-40c2-b29d-4324dd024eb0,Namespace:calico-system,Attempt:1,}" Apr 13 20:41:59.923790 systemd[1]: run-netns-cni\x2d963c545d\x2d1f92\x2d03c5\x2d2d62\x2d276b3907dbbf.mount: Deactivated successfully. 
Apr 13 20:42:00.006366 systemd-networkd[1218]: calif90f2ede309: Gained IPv6LL Apr 13 20:42:00.174670 systemd-networkd[1218]: calib7b1547486e: Link UP Apr 13 20:42:00.175412 systemd-networkd[1218]: calib7b1547486e: Gained carrier Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.017 [INFO][5205] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0 calico-apiserver-7d4775f99- calico-system 28cbe2de-3916-40c2-b29d-4324dd024eb0 1052 0 2026-04-13 20:41:22 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4775f99 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal calico-apiserver-7d4775f99-4x7xd eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calib7b1547486e [] [] }} ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.018 [INFO][5205] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.086 [INFO][5220] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" 
HandleID="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.105 [INFO][5220] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" HandleID="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", "pod":"calico-apiserver-7d4775f99-4x7xd", "timestamp":"2026-04-13 20:42:00.086908824 +0000 UTC"}, Hostname:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000398160)} Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.105 [INFO][5220] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.105 [INFO][5220] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.105 [INFO][5220] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal' Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.108 [INFO][5220] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.117 [INFO][5220] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.134 [INFO][5220] ipam/ipam.go 526: Trying affinity for 192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.138 [INFO][5220] ipam/ipam.go 160: Attempting to load block cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.142 [INFO][5220] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.16.64/26 host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.142 [INFO][5220] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.16.64/26 handle="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.144 [INFO][5220] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.152 [INFO][5220] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.16.64/26 handle="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.162 [INFO][5220] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.16.72/26] block=192.168.16.64/26 handle="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.214409 containerd[1598]: 2026-04-13 20:42:00.163 [INFO][5220] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.16.72/26] handle="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" host="ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal" Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.163 [INFO][5220] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.163 [INFO][5220] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.16.72/26] IPv6=[] ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" HandleID="k8s-pod-network.e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.168 [INFO][5205] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"28cbe2de-3916-40c2-b29d-4324dd024eb0", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-7d4775f99-4x7xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib7b1547486e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.168 [INFO][5205] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.72/32] ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.168 [INFO][5205] cni-plugin/dataplane_linux.go 69: 
Setting the host side veth name to calib7b1547486e ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.218746 containerd[1598]: 2026-04-13 20:42:00.180 [INFO][5205] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.219618 containerd[1598]: 2026-04-13 20:42:00.185 [INFO][5205] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"28cbe2de-3916-40c2-b29d-4324dd024eb0", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb", Pod:"calico-apiserver-7d4775f99-4x7xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib7b1547486e", MAC:"ae:7a:1e:3c:94:c4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:00.219618 containerd[1598]: 2026-04-13 20:42:00.209 [INFO][5205] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb" Namespace="calico-system" Pod="calico-apiserver-7d4775f99-4x7xd" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:00.285197 containerd[1598]: time="2026-04-13T20:42:00.284173202Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:42:00.285197 containerd[1598]: time="2026-04-13T20:42:00.284267393Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:42:00.285197 containerd[1598]: time="2026-04-13T20:42:00.284296383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:42:00.285646 containerd[1598]: time="2026-04-13T20:42:00.285012058Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:42:00.457929 containerd[1598]: time="2026-04-13T20:42:00.457724795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4775f99-4x7xd,Uid:28cbe2de-3916-40c2-b29d-4324dd024eb0,Namespace:calico-system,Attempt:1,} returns sandbox id \"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb\"" Apr 13 20:42:00.539354 containerd[1598]: time="2026-04-13T20:42:00.539310521Z" level=info msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" Apr 13 20:42:00.648150 containerd[1598]: time="2026-04-13T20:42:00.646848248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:00.649363 containerd[1598]: time="2026-04-13T20:42:00.649311059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:42:00.650877 containerd[1598]: time="2026-04-13T20:42:00.650840219Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:00.659093 containerd[1598]: time="2026-04-13T20:42:00.658857158Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 5.596945092s" Apr 13 20:42:00.659093 containerd[1598]: 
time="2026-04-13T20:42:00.658891014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:00.659093 containerd[1598]: time="2026-04-13T20:42:00.658904435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:42:00.671305 containerd[1598]: time="2026-04-13T20:42:00.671265370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:42:00.686023 containerd[1598]: time="2026-04-13T20:42:00.685639197Z" level=info msg="CreateContainer within sandbox \"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.624 [WARNING][5294] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"80bc0439-cac3-4b71-ae31-e9556293dc74", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb", Pod:"calico-apiserver-7d4775f99-cfwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif90f2ede309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.625 [INFO][5294] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.625 
[INFO][5294] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" iface="eth0" netns="" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.625 [INFO][5294] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.625 [INFO][5294] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.681 [INFO][5303] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.682 [INFO][5303] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.682 [INFO][5303] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.693 [WARNING][5303] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.694 [INFO][5303] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.696 [INFO][5303] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:00.701251 containerd[1598]: 2026-04-13 20:42:00.699 [INFO][5294] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.702046 containerd[1598]: time="2026-04-13T20:42:00.701331041Z" level=info msg="TearDown network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" successfully" Apr 13 20:42:00.702046 containerd[1598]: time="2026-04-13T20:42:00.701371174Z" level=info msg="StopPodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" returns successfully" Apr 13 20:42:00.702487 containerd[1598]: time="2026-04-13T20:42:00.702364834Z" level=info msg="RemovePodSandbox for \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" Apr 13 20:42:00.702487 containerd[1598]: time="2026-04-13T20:42:00.702423743Z" level=info msg="Forcibly stopping sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\"" Apr 13 20:42:00.706139 containerd[1598]: time="2026-04-13T20:42:00.705623219Z" level=info msg="CreateContainer within sandbox 
\"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"fb3f24017dd71f2108548f66a2ac984000e4992a3191d577f9cb801e38b5c0ee\"" Apr 13 20:42:00.706487 containerd[1598]: time="2026-04-13T20:42:00.706451029Z" level=info msg="StartContainer for \"fb3f24017dd71f2108548f66a2ac984000e4992a3191d577f9cb801e38b5c0ee\"" Apr 13 20:42:00.711153 systemd-networkd[1218]: cali1dea5c6c271: Gained IPv6LL Apr 13 20:42:00.884778 containerd[1598]: time="2026-04-13T20:42:00.884720142Z" level=info msg="StartContainer for \"fb3f24017dd71f2108548f66a2ac984000e4992a3191d577f9cb801e38b5c0ee\" returns successfully" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.814 [WARNING][5320] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"80bc0439-cac3-4b71-ae31-e9556293dc74", ResourceVersion:"1023", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb", Pod:"calico-apiserver-7d4775f99-cfwgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calif90f2ede309", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.815 [INFO][5320] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.815 [INFO][5320] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" iface="eth0" netns="" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.815 [INFO][5320] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.815 [INFO][5320] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.874 [INFO][5355] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.875 [INFO][5355] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.875 [INFO][5355] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.888 [WARNING][5355] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.888 [INFO][5355] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" HandleID="k8s-pod-network.bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--cfwgd-eth0" Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.892 [INFO][5355] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:00.906020 containerd[1598]: 2026-04-13 20:42:00.899 [INFO][5320] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4" Apr 13 20:42:00.907786 containerd[1598]: time="2026-04-13T20:42:00.906253537Z" level=info msg="TearDown network for sandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" successfully" Apr 13 20:42:00.925437 containerd[1598]: time="2026-04-13T20:42:00.924199291Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:00.925437 containerd[1598]: time="2026-04-13T20:42:00.924660169Z" level=info msg="RemovePodSandbox \"bd5e8a48c43bb79e749e721921c90f9b673751714a79da1a63bb677be49084c4\" returns successfully" Apr 13 20:42:00.930342 containerd[1598]: time="2026-04-13T20:42:00.930300289Z" level=info msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" Apr 13 20:42:01.031640 systemd-networkd[1218]: cali00e5025f0fe: Gained IPv6LL Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.021 [WARNING][5386] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0", GenerateName:"calico-kube-controllers-67b544dcfd-", Namespace:"calico-system", SelfLink:"", UID:"c7953eac-aed1-4972-966d-335bb475a17a", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b544dcfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a", 
Pod:"calico-kube-controllers-67b544dcfd-44f62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib77da2d0ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.021 [INFO][5386] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.022 [INFO][5386] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" iface="eth0" netns="" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.022 [INFO][5386] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.022 [INFO][5386] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.118 [INFO][5394] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.118 [INFO][5394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. 
Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.118 [INFO][5394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.149 [WARNING][5394] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.149 [INFO][5394] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.155 [INFO][5394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.161632 containerd[1598]: 2026-04-13 20:42:01.158 [INFO][5386] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.162631 containerd[1598]: time="2026-04-13T20:42:01.162592239Z" level=info msg="TearDown network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" successfully" Apr 13 20:42:01.162767 containerd[1598]: time="2026-04-13T20:42:01.162741893Z" level=info msg="StopPodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" returns successfully" Apr 13 20:42:01.163578 containerd[1598]: time="2026-04-13T20:42:01.163537474Z" level=info msg="RemovePodSandbox for \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" Apr 13 20:42:01.163962 containerd[1598]: time="2026-04-13T20:42:01.163934505Z" level=info msg="Forcibly stopping sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\"" Apr 13 20:42:01.277122 kubelet[2768]: I0413 20:42:01.275612 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67b544dcfd-44f62" podStartSLOduration=32.673824957 podStartE2EDuration="38.275588887s" podCreationTimestamp="2026-04-13 20:41:23 +0000 UTC" firstStartedPulling="2026-04-13 20:41:55.061535696 +0000 UTC m=+54.673861191" lastFinishedPulling="2026-04-13 20:42:00.663299615 +0000 UTC m=+60.275625121" observedRunningTime="2026-04-13 20:42:01.168956216 +0000 UTC m=+60.781281719" watchObservedRunningTime="2026-04-13 20:42:01.275588887 +0000 UTC m=+60.887914389" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.260 [WARNING][5426] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0", GenerateName:"calico-kube-controllers-67b544dcfd-", Namespace:"calico-system", SelfLink:"", UID:"c7953eac-aed1-4972-966d-335bb475a17a", ResourceVersion:"1066", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67b544dcfd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"653a3a40fd126af097ca38016cff54bbb1ce5c445b92f5fb5ef836758555f66a", Pod:"calico-kube-controllers-67b544dcfd-44f62", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib77da2d0ad7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.261 [INFO][5426] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.330422 
containerd[1598]: 2026-04-13 20:42:01.261 [INFO][5426] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" iface="eth0" netns="" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.261 [INFO][5426] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.261 [INFO][5426] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.315 [INFO][5443] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.315 [INFO][5443] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.315 [INFO][5443] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.324 [WARNING][5443] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.324 [INFO][5443] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" HandleID="k8s-pod-network.206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--kube--controllers--67b544dcfd--44f62-eth0" Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.327 [INFO][5443] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.330422 containerd[1598]: 2026-04-13 20:42:01.328 [INFO][5426] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5" Apr 13 20:42:01.330422 containerd[1598]: time="2026-04-13T20:42:01.330343513Z" level=info msg="TearDown network for sandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" successfully" Apr 13 20:42:01.336728 containerd[1598]: time="2026-04-13T20:42:01.336644029Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:01.336857 containerd[1598]: time="2026-04-13T20:42:01.336778346Z" level=info msg="RemovePodSandbox \"206eec7e835702caf1c146bc7d4169d2dc454bb52d335127d4b156750f3037d5\" returns successfully" Apr 13 20:42:01.337427 containerd[1598]: time="2026-04-13T20:42:01.337391514Z" level=info msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.384 [WARNING][5457] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"28cbe2de-3916-40c2-b29d-4324dd024eb0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb", Pod:"calico-apiserver-7d4775f99-4x7xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.16.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib7b1547486e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.385 [INFO][5457] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.385 [INFO][5457] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" iface="eth0" netns="" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.385 [INFO][5457] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.385 [INFO][5457] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.411 [INFO][5464] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.411 [INFO][5464] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.411 [INFO][5464] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.421 [WARNING][5464] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.421 [INFO][5464] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.423 [INFO][5464] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.426396 containerd[1598]: 2026-04-13 20:42:01.424 [INFO][5457] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.427623 containerd[1598]: time="2026-04-13T20:42:01.426429970Z" level=info msg="TearDown network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" successfully" Apr 13 20:42:01.427623 containerd[1598]: time="2026-04-13T20:42:01.426466058Z" level=info msg="StopPodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" returns successfully" Apr 13 20:42:01.427623 containerd[1598]: time="2026-04-13T20:42:01.427091069Z" level=info msg="RemovePodSandbox for \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" Apr 13 20:42:01.427623 containerd[1598]: time="2026-04-13T20:42:01.427131521Z" level=info msg="Forcibly stopping sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\"" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.479 [WARNING][5479] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0", GenerateName:"calico-apiserver-7d4775f99-", Namespace:"calico-system", SelfLink:"", UID:"28cbe2de-3916-40c2-b29d-4324dd024eb0", ResourceVersion:"1056", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4775f99", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb", Pod:"calico-apiserver-7d4775f99-4x7xd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calib7b1547486e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.479 [INFO][5479] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.479 
[INFO][5479] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" iface="eth0" netns="" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.479 [INFO][5479] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.479 [INFO][5479] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.513 [INFO][5486] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.513 [INFO][5486] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.513 [INFO][5486] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.522 [WARNING][5486] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.522 [INFO][5486] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" HandleID="k8s-pod-network.a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-calico--apiserver--7d4775f99--4x7xd-eth0" Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.524 [INFO][5486] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.528328 containerd[1598]: 2026-04-13 20:42:01.526 [INFO][5479] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320" Apr 13 20:42:01.528328 containerd[1598]: time="2026-04-13T20:42:01.527876664Z" level=info msg="TearDown network for sandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" successfully" Apr 13 20:42:01.536079 containerd[1598]: time="2026-04-13T20:42:01.535817415Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:01.536079 containerd[1598]: time="2026-04-13T20:42:01.535922540Z" level=info msg="RemovePodSandbox \"a8d2d40d7d19a8a4d10f05d8c309bebd3183e7389d2de654ade24b1878484320\" returns successfully" Apr 13 20:42:01.536816 containerd[1598]: time="2026-04-13T20:42:01.536783781Z" level=info msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.588 [WARNING][5500] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f5a2099-cb50-4d52-9877-f1dd83710551", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d", Pod:"coredns-674b8bbfcf-2m2c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali7651e3b8ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.589 [INFO][5500] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.589 [INFO][5500] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" iface="eth0" netns="" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.589 [INFO][5500] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.589 [INFO][5500] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.626 [INFO][5507] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.626 [INFO][5507] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.626 [INFO][5507] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.635 [WARNING][5507] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.635 [INFO][5507] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.637 [INFO][5507] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.641379 containerd[1598]: 2026-04-13 20:42:01.639 [INFO][5500] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.641379 containerd[1598]: time="2026-04-13T20:42:01.641317840Z" level=info msg="TearDown network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" successfully" Apr 13 20:42:01.641379 containerd[1598]: time="2026-04-13T20:42:01.641352866Z" level=info msg="StopPodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" returns successfully" Apr 13 20:42:01.643441 containerd[1598]: time="2026-04-13T20:42:01.642965840Z" level=info msg="RemovePodSandbox for \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" Apr 13 20:42:01.643441 containerd[1598]: time="2026-04-13T20:42:01.643008263Z" level=info msg="Forcibly stopping sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\"" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.688 [WARNING][5521] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f5a2099-cb50-4d52-9877-f1dd83710551", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"eab9d93d9154189d2d33e2e2d21c8bf69bff2548ac52b1cf84a6206cfd89386d", Pod:"coredns-674b8bbfcf-2m2c2", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7651e3b8ff3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.688 [INFO][5521] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.689 [INFO][5521] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" iface="eth0" netns="" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.689 [INFO][5521] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.689 [INFO][5521] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.718 [INFO][5528] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.718 [INFO][5528] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.718 [INFO][5528] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.728 [WARNING][5528] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.728 [INFO][5528] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" HandleID="k8s-pod-network.588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--2m2c2-eth0" Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.730 [INFO][5528] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.733980 containerd[1598]: 2026-04-13 20:42:01.732 [INFO][5521] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6" Apr 13 20:42:01.735132 containerd[1598]: time="2026-04-13T20:42:01.734036966Z" level=info msg="TearDown network for sandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" successfully" Apr 13 20:42:01.739213 containerd[1598]: time="2026-04-13T20:42:01.739164813Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:01.739525 containerd[1598]: time="2026-04-13T20:42:01.739253324Z" level=info msg="RemovePodSandbox \"588b14562306f7edefcef2b05fd65b67a86e7dc9f26a577e9261af71f990d9b6\" returns successfully" Apr 13 20:42:01.739876 containerd[1598]: time="2026-04-13T20:42:01.739842900Z" level=info msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.799 [WARNING][5542] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a802a66-82ba-4481-9d13-dc399ccc739d", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705", Pod:"csi-node-driver-wzf9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", 
IPNetworks:[]string{"192.168.16.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00e5025f0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.799 [INFO][5542] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.799 [INFO][5542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" iface="eth0" netns="" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.799 [INFO][5542] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.799 [INFO][5542] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.845 [INFO][5549] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.847 [INFO][5549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.847 [INFO][5549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.859 [WARNING][5549] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.859 [INFO][5549] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.862 [INFO][5549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:01.867385 containerd[1598]: 2026-04-13 20:42:01.864 [INFO][5542] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:01.867385 containerd[1598]: time="2026-04-13T20:42:01.867290393Z" level=info msg="TearDown network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" successfully" Apr 13 20:42:01.867385 containerd[1598]: time="2026-04-13T20:42:01.867327904Z" level=info msg="StopPodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" returns successfully" Apr 13 20:42:01.869290 containerd[1598]: time="2026-04-13T20:42:01.868045571Z" level=info msg="RemovePodSandbox for \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" Apr 13 20:42:01.869290 containerd[1598]: time="2026-04-13T20:42:01.868109979Z" level=info msg="Forcibly stopping sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\"" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.938 [WARNING][5563] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0a802a66-82ba-4481-9d13-dc399ccc739d", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705", Pod:"csi-node-driver-wzf9p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali00e5025f0fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.939 [INFO][5563] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.939 
[INFO][5563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" iface="eth0" netns="" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.939 [INFO][5563] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.939 [INFO][5563] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.986 [INFO][5570] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.989 [INFO][5570] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:01.990 [INFO][5570] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:02.005 [WARNING][5570] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:02.005 [INFO][5570] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" HandleID="k8s-pod-network.992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-csi--node--driver--wzf9p-eth0" Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:02.007 [INFO][5570] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.011815 containerd[1598]: 2026-04-13 20:42:02.009 [INFO][5563] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481" Apr 13 20:42:02.014473 containerd[1598]: time="2026-04-13T20:42:02.011823461Z" level=info msg="TearDown network for sandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" successfully" Apr 13 20:42:02.016904 containerd[1598]: time="2026-04-13T20:42:02.016721784Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:02.017017 containerd[1598]: time="2026-04-13T20:42:02.016812698Z" level=info msg="RemovePodSandbox \"992efed50f87263a733002599a85d44d327a19dc8261940d57a1611e7cd31481\" returns successfully" Apr 13 20:42:02.017796 containerd[1598]: time="2026-04-13T20:42:02.017760226Z" level=info msg="StopPodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.092 [WARNING][5585] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2e0a0dff-fcd2-4863-8bb8-041686ac070a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d", Pod:"goldmane-5b85766d88-qch75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dea5c6c271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.092 [INFO][5585] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.092 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" iface="eth0" netns="" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.092 [INFO][5585] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.092 [INFO][5585] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.142 [INFO][5593] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.143 [INFO][5593] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.143 [INFO][5593] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.158 [WARNING][5593] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.158 [INFO][5593] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.160 [INFO][5593] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.169481 containerd[1598]: 2026-04-13 20:42:02.166 [INFO][5585] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.169481 containerd[1598]: time="2026-04-13T20:42:02.169367202Z" level=info msg="TearDown network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" successfully" Apr 13 20:42:02.169481 containerd[1598]: time="2026-04-13T20:42:02.169398787Z" level=info msg="StopPodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" returns successfully" Apr 13 20:42:02.171538 containerd[1598]: time="2026-04-13T20:42:02.170482371Z" level=info msg="RemovePodSandbox for \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" Apr 13 20:42:02.171538 containerd[1598]: time="2026-04-13T20:42:02.170522160Z" level=info msg="Forcibly stopping sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\"" Apr 13 20:42:02.183356 systemd-networkd[1218]: calib7b1547486e: Gained IPv6LL Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.261 
[WARNING][5608] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"2e0a0dff-fcd2-4863-8bb8-041686ac070a", ResourceVersion:"1027", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d", Pod:"goldmane-5b85766d88-qch75", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1dea5c6c271", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.262 [INFO][5608] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.308215 
containerd[1598]: 2026-04-13 20:42:02.262 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" iface="eth0" netns="" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.262 [INFO][5608] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.262 [INFO][5608] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.291 [INFO][5618] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.291 [INFO][5618] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.291 [INFO][5618] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.301 [WARNING][5618] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.301 [INFO][5618] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" HandleID="k8s-pod-network.9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-goldmane--5b85766d88--qch75-eth0" Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.303 [INFO][5618] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.308215 containerd[1598]: 2026-04-13 20:42:02.305 [INFO][5608] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49" Apr 13 20:42:02.308215 containerd[1598]: time="2026-04-13T20:42:02.307172383Z" level=info msg="TearDown network for sandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" successfully" Apr 13 20:42:02.327889 containerd[1598]: time="2026-04-13T20:42:02.327826688Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:02.328041 containerd[1598]: time="2026-04-13T20:42:02.327907600Z" level=info msg="RemovePodSandbox \"9b73423bbb3631ad7221403cd8ee659a6d7ddf36ddf50f2a43414f90cd6abc49\" returns successfully" Apr 13 20:42:02.331439 containerd[1598]: time="2026-04-13T20:42:02.331265256Z" level=info msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.416 [WARNING][5637] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf6c12d4-a453-4d40-bc8e-b49f714452b6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7", Pod:"coredns-674b8bbfcf-dkdsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"cali8964bfe202e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.417 [INFO][5637] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.417 [INFO][5637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" iface="eth0" netns="" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.417 [INFO][5637] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.417 [INFO][5637] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.463 [INFO][5644] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.466 [INFO][5644] 
ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.466 [INFO][5644] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.483 [WARNING][5644] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.483 [INFO][5644] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.485 [INFO][5644] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.492446 containerd[1598]: 2026-04-13 20:42:02.488 [INFO][5637] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.493859 containerd[1598]: time="2026-04-13T20:42:02.492617121Z" level=info msg="TearDown network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" successfully" Apr 13 20:42:02.493859 containerd[1598]: time="2026-04-13T20:42:02.492653232Z" level=info msg="StopPodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" returns successfully" Apr 13 20:42:02.496281 containerd[1598]: time="2026-04-13T20:42:02.495386741Z" level=info msg="RemovePodSandbox for \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" Apr 13 20:42:02.496281 containerd[1598]: time="2026-04-13T20:42:02.495431492Z" level=info msg="Forcibly stopping sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\"" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.569 [WARNING][5659] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"bf6c12d4-a453-4d40-bc8e-b49f714452b6", ResourceVersion:"1035", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 41, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-5eddfa61563ae4f9d392.c.flatcar-212911.internal", ContainerID:"60e3dff6de86bde7bd114cb22f8052257839163712e65294cae019e0af780df7", Pod:"coredns-674b8bbfcf-dkdsv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8964bfe202e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), 
QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.569 [INFO][5659] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.571 [INFO][5659] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" iface="eth0" netns="" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.571 [INFO][5659] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.571 [INFO][5659] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.628 [INFO][5666] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.628 [INFO][5666] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.629 [INFO][5666] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.640 [WARNING][5666] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.640 [INFO][5666] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" HandleID="k8s-pod-network.ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-coredns--674b8bbfcf--dkdsv-eth0" Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.642 [INFO][5666] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.648317 containerd[1598]: 2026-04-13 20:42:02.645 [INFO][5659] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109" Apr 13 20:42:02.650535 containerd[1598]: time="2026-04-13T20:42:02.648274860Z" level=info msg="TearDown network for sandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" successfully" Apr 13 20:42:02.656455 containerd[1598]: time="2026-04-13T20:42:02.656247954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:02.656455 containerd[1598]: time="2026-04-13T20:42:02.656334251Z" level=info msg="RemovePodSandbox \"ae8e529bea172e02346710979b5455df55f25ec07280faf8f892e942f3250109\" returns successfully" Apr 13 20:42:02.657202 containerd[1598]: time="2026-04-13T20:42:02.657128616Z" level=info msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.735 [WARNING][5681] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.736 [INFO][5681] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.736 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" iface="eth0" netns="" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.736 [INFO][5681] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.736 [INFO][5681] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.794 [INFO][5689] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.794 [INFO][5689] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.794 [INFO][5689] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.806 [WARNING][5689] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.806 [INFO][5689] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.808 [INFO][5689] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.814541 containerd[1598]: 2026-04-13 20:42:02.811 [INFO][5681] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.816585 containerd[1598]: time="2026-04-13T20:42:02.814580600Z" level=info msg="TearDown network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" successfully" Apr 13 20:42:02.816585 containerd[1598]: time="2026-04-13T20:42:02.814614689Z" level=info msg="StopPodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" returns successfully" Apr 13 20:42:02.816585 containerd[1598]: time="2026-04-13T20:42:02.815546857Z" level=info msg="RemovePodSandbox for \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" Apr 13 20:42:02.816585 containerd[1598]: time="2026-04-13T20:42:02.815776868Z" level=info msg="Forcibly stopping sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\"" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.907 [WARNING][5703] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, 
moving forward with the clean up ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" WorkloadEndpoint="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.907 [INFO][5703] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.907 [INFO][5703] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" iface="eth0" netns="" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.907 [INFO][5703] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.907 [INFO][5703] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.956 [INFO][5710] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.956 [INFO][5710] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.956 [INFO][5710] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.968 [WARNING][5710] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.969 [INFO][5710] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" HandleID="k8s-pod-network.7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Workload="ci--4081--3--7--5eddfa61563ae4f9d392.c.flatcar--212911.internal-k8s-whisker--848d97df56--d5zqb-eth0" Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.973 [INFO][5710] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:42:02.979169 containerd[1598]: 2026-04-13 20:42:02.975 [INFO][5703] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e" Apr 13 20:42:02.979169 containerd[1598]: time="2026-04-13T20:42:02.978934996Z" level=info msg="TearDown network for sandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" successfully" Apr 13 20:42:02.985768 containerd[1598]: time="2026-04-13T20:42:02.985545484Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:42:02.985768 containerd[1598]: time="2026-04-13T20:42:02.985629013Z" level=info msg="RemovePodSandbox \"7c92c006084b332bf2efb6562df4022aabf789c0b49e8160f7d4711c2380fe7e\" returns successfully" Apr 13 20:42:03.789751 containerd[1598]: time="2026-04-13T20:42:03.789696123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:03.791090 containerd[1598]: time="2026-04-13T20:42:03.790998448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:42:03.792415 containerd[1598]: time="2026-04-13T20:42:03.792292717Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:03.795730 containerd[1598]: time="2026-04-13T20:42:03.795490354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:03.796836 containerd[1598]: time="2026-04-13T20:42:03.796615122Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.125135731s" Apr 13 20:42:03.796836 containerd[1598]: time="2026-04-13T20:42:03.796661458Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:42:03.798599 containerd[1598]: time="2026-04-13T20:42:03.798465719Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:42:03.803502 containerd[1598]: time="2026-04-13T20:42:03.803445030Z" level=info msg="CreateContainer within sandbox \"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:42:03.827294 containerd[1598]: time="2026-04-13T20:42:03.827216945Z" level=info msg="CreateContainer within sandbox \"d38ece3b0a5bb3f51a5aa1613131ca0483afeb98b9b25918ccd9197f8d12c6fb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"624e0cdcbb7db306b4eb6b6221933597f7c0ff606c945100372e2120c38fd2b6\"" Apr 13 20:42:03.829582 containerd[1598]: time="2026-04-13T20:42:03.827954492Z" level=info msg="StartContainer for \"624e0cdcbb7db306b4eb6b6221933597f7c0ff606c945100372e2120c38fd2b6\"" Apr 13 20:42:03.883137 systemd[1]: run-containerd-runc-k8s.io-624e0cdcbb7db306b4eb6b6221933597f7c0ff606c945100372e2120c38fd2b6-runc.QTowXs.mount: Deactivated successfully. Apr 13 20:42:03.948031 containerd[1598]: time="2026-04-13T20:42:03.947976332Z" level=info msg="StartContainer for \"624e0cdcbb7db306b4eb6b6221933597f7c0ff606c945100372e2120c38fd2b6\" returns successfully" Apr 13 20:42:03.978482 systemd[1]: Started sshd@7-10.128.0.46:22-60.167.19.189:50947.service - OpenSSH per-connection server daemon (60.167.19.189:50947). 
Apr 13 20:42:04.200120 kubelet[2768]: I0413 20:42:04.197749 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d4775f99-cfwgd" podStartSLOduration=37.571953074 podStartE2EDuration="42.197723756s" podCreationTimestamp="2026-04-13 20:41:22 +0000 UTC" firstStartedPulling="2026-04-13 20:41:59.172325016 +0000 UTC m=+58.784650504" lastFinishedPulling="2026-04-13 20:42:03.798095686 +0000 UTC m=+63.410421186" observedRunningTime="2026-04-13 20:42:04.192650748 +0000 UTC m=+63.804976250" watchObservedRunningTime="2026-04-13 20:42:04.197723756 +0000 UTC m=+63.810049258" Apr 13 20:42:04.747486 ntpd[1544]: Listen normally on 9 calib77da2d0ad7 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:42:04.747610 ntpd[1544]: Listen normally on 10 cali8964bfe202e [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 9 calib77da2d0ad7 [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 10 cali8964bfe202e [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 11 cali7651e3b8ff3 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 12 calif90f2ede309 [fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 13 cali1dea5c6c271 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 14 cali00e5025f0fe [fe80::ecee:eeff:feee:eeee%13]:123 Apr 13 20:42:04.748152 ntpd[1544]: 13 Apr 20:42:04 ntpd[1544]: Listen normally on 15 calib7b1547486e [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:42:04.747674 ntpd[1544]: Listen normally on 11 cali7651e3b8ff3 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:42:04.747733 ntpd[1544]: Listen normally on 12 calif90f2ede309 
[fe80::ecee:eeff:feee:eeee%11]:123 Apr 13 20:42:04.747794 ntpd[1544]: Listen normally on 13 cali1dea5c6c271 [fe80::ecee:eeff:feee:eeee%12]:123 Apr 13 20:42:04.747852 ntpd[1544]: Listen normally on 14 cali00e5025f0fe [fe80::ecee:eeff:feee:eeee%13]:123 Apr 13 20:42:04.747918 ntpd[1544]: Listen normally on 15 calib7b1547486e [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:42:06.565993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1472469328.mount: Deactivated successfully. Apr 13 20:42:07.626114 containerd[1598]: time="2026-04-13T20:42:07.626027555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:07.627449 containerd[1598]: time="2026-04-13T20:42:07.627378242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:42:07.628442 containerd[1598]: time="2026-04-13T20:42:07.628322515Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:07.631995 containerd[1598]: time="2026-04-13T20:42:07.631807029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:42:07.633047 containerd[1598]: time="2026-04-13T20:42:07.632898780Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 3.834391878s" Apr 13 20:42:07.633047 containerd[1598]: time="2026-04-13T20:42:07.632943405Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:42:07.635228 containerd[1598]: time="2026-04-13T20:42:07.634726766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:42:07.639326 containerd[1598]: time="2026-04-13T20:42:07.639276895Z" level=info msg="CreateContainer within sandbox \"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:42:07.662124 containerd[1598]: time="2026-04-13T20:42:07.662075439Z" level=info msg="CreateContainer within sandbox \"e01c86281a62c5a334e7fc1f0e24ef7f0f498999406ef931d0312912b0beac0d\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"5ffa75fd00b9654c11986bf60a9e6ed4e5843df4f18535c926306ad3c2792179\"" Apr 13 20:42:07.664260 containerd[1598]: time="2026-04-13T20:42:07.662684913Z" level=info msg="StartContainer for \"5ffa75fd00b9654c11986bf60a9e6ed4e5843df4f18535c926306ad3c2792179\"" Apr 13 20:42:07.700313 sshd[5755]: Invalid user Admin from 60.167.19.189 port 50947 Apr 13 20:42:07.888827 containerd[1598]: time="2026-04-13T20:42:07.887996843Z" level=info msg="StartContainer for \"5ffa75fd00b9654c11986bf60a9e6ed4e5843df4f18535c926306ad3c2792179\" returns successfully" Apr 13 20:42:08.238227 kubelet[2768]: I0413 20:42:08.236917 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-qch75" podStartSLOduration=37.904814627 podStartE2EDuration="46.236851586s" podCreationTimestamp="2026-04-13 20:41:22 +0000 UTC" firstStartedPulling="2026-04-13 20:41:59.302405766 +0000 UTC m=+58.914731256" lastFinishedPulling="2026-04-13 20:42:07.634442716 +0000 UTC m=+67.246768215" observedRunningTime="2026-04-13 20:42:08.235826801 +0000 UTC m=+67.848152302" watchObservedRunningTime="2026-04-13 20:42:08.236851586 +0000 UTC m=+67.849177090" Apr 13 
20:42:08.357848 sshd[5755]: PAM: Permission denied for illegal user Admin from 60.167.19.189
Apr 13 20:42:08.358695 sshd[5755]: Failed keyboard-interactive/pam for invalid user Admin from 60.167.19.189 port 50947 ssh2
Apr 13 20:42:08.905324 containerd[1598]: time="2026-04-13T20:42:08.905257062Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:08.906728 containerd[1598]: time="2026-04-13T20:42:08.906659804Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Apr 13 20:42:08.907970 containerd[1598]: time="2026-04-13T20:42:08.907901990Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:08.911139 containerd[1598]: time="2026-04-13T20:42:08.911053471Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:08.912875 containerd[1598]: time="2026-04-13T20:42:08.911984188Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.277217152s"
Apr 13 20:42:08.912875 containerd[1598]: time="2026-04-13T20:42:08.912030036Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Apr 13 20:42:08.914006 containerd[1598]: time="2026-04-13T20:42:08.913964332Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\""
Apr 13 20:42:08.917702 containerd[1598]: time="2026-04-13T20:42:08.917665894Z" level=info msg="CreateContainer within sandbox \"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Apr 13 20:42:08.939609 containerd[1598]: time="2026-04-13T20:42:08.939554431Z" level=info msg="CreateContainer within sandbox \"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"90030dfbee54f2716c19054335b3169c0fd9d7a81ba3711b4de717850d2c4790\""
Apr 13 20:42:08.942603 containerd[1598]: time="2026-04-13T20:42:08.941840341Z" level=info msg="StartContainer for \"90030dfbee54f2716c19054335b3169c0fd9d7a81ba3711b4de717850d2c4790\""
Apr 13 20:42:09.041822 containerd[1598]: time="2026-04-13T20:42:09.041769744Z" level=info msg="StartContainer for \"90030dfbee54f2716c19054335b3169c0fd9d7a81ba3711b4de717850d2c4790\" returns successfully"
Apr 13 20:42:09.135453 sshd[5755]: Connection closed by invalid user Admin 60.167.19.189 port 50947 [preauth]
Apr 13 20:42:09.142353 systemd[1]: sshd@7-10.128.0.46:22-60.167.19.189:50947.service: Deactivated successfully.
Apr 13 20:42:09.152998 containerd[1598]: time="2026-04-13T20:42:09.152932261Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:09.154581 containerd[1598]: time="2026-04-13T20:42:09.154473871Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77"
Apr 13 20:42:09.157809 containerd[1598]: time="2026-04-13T20:42:09.157682216Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 243.672212ms"
Apr 13 20:42:09.157809 containerd[1598]: time="2026-04-13T20:42:09.157729684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\""
Apr 13 20:42:09.161379 containerd[1598]: time="2026-04-13T20:42:09.160674032Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Apr 13 20:42:09.164331 containerd[1598]: time="2026-04-13T20:42:09.164280364Z" level=info msg="CreateContainer within sandbox \"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Apr 13 20:42:09.180845 containerd[1598]: time="2026-04-13T20:42:09.180799530Z" level=info msg="CreateContainer within sandbox \"e867929e4d33c9a108e08c7d3a1868c91662c6978f3ad5c2f795943d249668bb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"914ca248e9c4b4599a1a46a7516accd2dbd27027e2e446c7ef62ee99a8b6ed73\""
Apr 13 20:42:09.181574 containerd[1598]: time="2026-04-13T20:42:09.181526806Z" level=info msg="StartContainer for \"914ca248e9c4b4599a1a46a7516accd2dbd27027e2e446c7ef62ee99a8b6ed73\""
Apr 13 20:42:09.325569 containerd[1598]: time="2026-04-13T20:42:09.325438747Z" level=info msg="StartContainer for \"914ca248e9c4b4599a1a46a7516accd2dbd27027e2e446c7ef62ee99a8b6ed73\" returns successfully"
Apr 13 20:42:10.906663 containerd[1598]: time="2026-04-13T20:42:10.906544965Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:10.909747 containerd[1598]: time="2026-04-13T20:42:10.909540838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Apr 13 20:42:10.910540 containerd[1598]: time="2026-04-13T20:42:10.910475506Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:10.928130 containerd[1598]: time="2026-04-13T20:42:10.925485397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:42:10.933647 containerd[1598]: time="2026-04-13T20:42:10.933492016Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.772685146s"
Apr 13 20:42:10.934188 containerd[1598]: time="2026-04-13T20:42:10.934107275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Apr 13 20:42:10.941961 containerd[1598]: time="2026-04-13T20:42:10.941823000Z" level=info msg="CreateContainer within sandbox \"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Apr 13 20:42:10.964989 containerd[1598]: time="2026-04-13T20:42:10.964045814Z" level=info msg="CreateContainer within sandbox \"740a2bff9c37a9ac3c6dc0ad87adcaefabf9454a028794f2cd843425848ca705\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"789b7909137985ef7d02378649ef1ecb1577614171d3390e403eaa4ee49ef692\""
Apr 13 20:42:10.967096 containerd[1598]: time="2026-04-13T20:42:10.966176673Z" level=info msg="StartContainer for \"789b7909137985ef7d02378649ef1ecb1577614171d3390e403eaa4ee49ef692\""
Apr 13 20:42:11.115743 containerd[1598]: time="2026-04-13T20:42:11.115656129Z" level=info msg="StartContainer for \"789b7909137985ef7d02378649ef1ecb1577614171d3390e403eaa4ee49ef692\" returns successfully"
Apr 13 20:42:11.250596 kubelet[2768]: I0413 20:42:11.250302 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d4775f99-4x7xd" podStartSLOduration=40.553566533 podStartE2EDuration="49.250276232s" podCreationTimestamp="2026-04-13 20:41:22 +0000 UTC" firstStartedPulling="2026-04-13 20:42:00.46256955 +0000 UTC m=+60.074895025" lastFinishedPulling="2026-04-13 20:42:09.15927923 +0000 UTC m=+68.771604724" observedRunningTime="2026-04-13 20:42:10.247508893 +0000 UTC m=+69.859834395" watchObservedRunningTime="2026-04-13 20:42:11.250276232 +0000 UTC m=+70.862601731"
Apr 13 20:42:11.252812 kubelet[2768]: I0413 20:42:11.251577 2768 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wzf9p" podStartSLOduration=37.166894403 podStartE2EDuration="48.251540712s" podCreationTimestamp="2026-04-13 20:41:23 +0000 UTC" firstStartedPulling="2026-04-13 20:41:59.85110374 +0000 UTC m=+59.463429240" lastFinishedPulling="2026-04-13 20:42:10.935750057 +0000 UTC m=+70.548075549" observedRunningTime="2026-04-13 20:42:11.249205174 +0000 UTC m=+70.861530680" watchObservedRunningTime="2026-04-13 20:42:11.251540712 +0000 UTC m=+70.863866213"
Apr 13 20:42:11.714910 kubelet[2768]: I0413 20:42:11.714471 2768 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Apr 13 20:42:11.714910 kubelet[2768]: I0413 20:42:11.714528 2768 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Apr 13 20:42:12.607841 systemd[1]: Started sshd@8-10.128.0.46:22-20.229.252.112:37830.service - OpenSSH per-connection server daemon (20.229.252.112:37830).
Apr 13 20:42:13.295613 sshd[6030]: Accepted publickey for core from 20.229.252.112 port 37830 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:13.296492 sshd[6030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:13.303182 systemd-logind[1582]: New session 8 of user core.
Apr 13 20:42:13.307663 systemd[1]: Started session-8.scope - Session 8 of User core.
Apr 13 20:42:13.868389 sshd[6030]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:13.874821 systemd[1]: sshd@8-10.128.0.46:22-20.229.252.112:37830.service: Deactivated successfully.
Apr 13 20:42:13.879884 systemd[1]: session-8.scope: Deactivated successfully.
Apr 13 20:42:13.880607 systemd-logind[1582]: Session 8 logged out. Waiting for processes to exit.
Apr 13 20:42:13.882535 systemd-logind[1582]: Removed session 8.
Apr 13 20:42:18.991473 systemd[1]: Started sshd@9-10.128.0.46:22-20.229.252.112:55038.service - OpenSSH per-connection server daemon (20.229.252.112:55038).
Apr 13 20:42:19.715099 sshd[6066]: Accepted publickey for core from 20.229.252.112 port 55038 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:19.716776 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:19.723189 systemd-logind[1582]: New session 9 of user core.
Apr 13 20:42:19.728399 systemd[1]: Started session-9.scope - Session 9 of User core.
Apr 13 20:42:20.297192 sshd[6066]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:20.301525 systemd[1]: sshd@9-10.128.0.46:22-20.229.252.112:55038.service: Deactivated successfully.
Apr 13 20:42:20.308861 systemd[1]: session-9.scope: Deactivated successfully.
Apr 13 20:42:20.310900 systemd-logind[1582]: Session 9 logged out. Waiting for processes to exit.
Apr 13 20:42:20.312406 systemd-logind[1582]: Removed session 9.
Apr 13 20:42:25.417993 systemd[1]: Started sshd@10-10.128.0.46:22-20.229.252.112:37486.service - OpenSSH per-connection server daemon (20.229.252.112:37486).
Apr 13 20:42:26.132671 sshd[6092]: Accepted publickey for core from 20.229.252.112 port 37486 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:26.134604 sshd[6092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:26.142182 systemd-logind[1582]: New session 10 of user core.
Apr 13 20:42:26.145548 systemd[1]: Started session-10.scope - Session 10 of User core.
Apr 13 20:42:26.727148 sshd[6092]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:26.732102 systemd[1]: sshd@10-10.128.0.46:22-20.229.252.112:37486.service: Deactivated successfully.
Apr 13 20:42:26.738835 systemd[1]: session-10.scope: Deactivated successfully.
Apr 13 20:42:26.740913 systemd-logind[1582]: Session 10 logged out. Waiting for processes to exit.
Apr 13 20:42:26.742568 systemd-logind[1582]: Removed session 10.
Apr 13 20:42:31.849507 systemd[1]: Started sshd@11-10.128.0.46:22-20.229.252.112:37498.service - OpenSSH per-connection server daemon (20.229.252.112:37498).
Apr 13 20:42:32.569170 sshd[6137]: Accepted publickey for core from 20.229.252.112 port 37498 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:32.573179 sshd[6137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:32.582593 systemd-logind[1582]: New session 11 of user core.
Apr 13 20:42:32.590677 systemd[1]: Started session-11.scope - Session 11 of User core.
Apr 13 20:42:33.158333 sshd[6137]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:33.164775 systemd[1]: sshd@11-10.128.0.46:22-20.229.252.112:37498.service: Deactivated successfully.
Apr 13 20:42:33.170283 systemd-logind[1582]: Session 11 logged out. Waiting for processes to exit.
Apr 13 20:42:33.171037 systemd[1]: session-11.scope: Deactivated successfully.
Apr 13 20:42:33.173334 systemd-logind[1582]: Removed session 11.
Apr 13 20:42:33.279470 systemd[1]: Started sshd@12-10.128.0.46:22-20.229.252.112:37506.service - OpenSSH per-connection server daemon (20.229.252.112:37506).
Apr 13 20:42:33.998757 sshd[6168]: Accepted publickey for core from 20.229.252.112 port 37506 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:34.000967 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:34.007163 systemd-logind[1582]: New session 12 of user core.
Apr 13 20:42:34.011478 systemd[1]: Started session-12.scope - Session 12 of User core.
Apr 13 20:42:34.616231 sshd[6168]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:34.621636 systemd[1]: sshd@12-10.128.0.46:22-20.229.252.112:37506.service: Deactivated successfully.
Apr 13 20:42:34.627905 systemd[1]: session-12.scope: Deactivated successfully.
Apr 13 20:42:34.628263 systemd-logind[1582]: Session 12 logged out. Waiting for processes to exit.
Apr 13 20:42:34.631336 systemd-logind[1582]: Removed session 12.
Apr 13 20:42:34.732662 systemd[1]: Started sshd@13-10.128.0.46:22-20.229.252.112:37512.service - OpenSSH per-connection server daemon (20.229.252.112:37512).
Apr 13 20:42:35.424455 sshd[6180]: Accepted publickey for core from 20.229.252.112 port 37512 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:35.426495 sshd[6180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:35.435518 systemd-logind[1582]: New session 13 of user core.
Apr 13 20:42:35.441779 systemd[1]: Started session-13.scope - Session 13 of User core.
Apr 13 20:42:35.986840 sshd[6180]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:35.993126 systemd-logind[1582]: Session 13 logged out. Waiting for processes to exit.
Apr 13 20:42:35.993529 systemd[1]: sshd@13-10.128.0.46:22-20.229.252.112:37512.service: Deactivated successfully.
Apr 13 20:42:35.997049 systemd[1]: session-13.scope: Deactivated successfully.
Apr 13 20:42:36.000561 systemd-logind[1582]: Removed session 13.
Apr 13 20:42:41.107955 systemd[1]: Started sshd@14-10.128.0.46:22-20.229.252.112:40000.service - OpenSSH per-connection server daemon (20.229.252.112:40000).
Apr 13 20:42:41.829617 sshd[6243]: Accepted publickey for core from 20.229.252.112 port 40000 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:41.831896 sshd[6243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:41.839185 systemd-logind[1582]: New session 14 of user core.
Apr 13 20:42:41.848512 systemd[1]: Started session-14.scope - Session 14 of User core.
Apr 13 20:42:42.405984 sshd[6243]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:42.412164 systemd[1]: sshd@14-10.128.0.46:22-20.229.252.112:40000.service: Deactivated successfully.
Apr 13 20:42:42.418633 systemd-logind[1582]: Session 14 logged out. Waiting for processes to exit.
Apr 13 20:42:42.419720 systemd[1]: session-14.scope: Deactivated successfully.
Apr 13 20:42:42.423718 systemd-logind[1582]: Removed session 14.
Apr 13 20:42:42.521546 systemd[1]: Started sshd@15-10.128.0.46:22-20.229.252.112:40010.service - OpenSSH per-connection server daemon (20.229.252.112:40010).
Apr 13 20:42:43.214644 sshd[6257]: Accepted publickey for core from 20.229.252.112 port 40010 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:43.216743 sshd[6257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:43.223206 systemd-logind[1582]: New session 15 of user core.
Apr 13 20:42:43.231827 systemd[1]: Started session-15.scope - Session 15 of User core.
Apr 13 20:42:43.841284 sshd[6257]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:43.845870 systemd[1]: sshd@15-10.128.0.46:22-20.229.252.112:40010.service: Deactivated successfully.
Apr 13 20:42:43.852003 systemd-logind[1582]: Session 15 logged out. Waiting for processes to exit.
Apr 13 20:42:43.853360 systemd[1]: session-15.scope: Deactivated successfully.
Apr 13 20:42:43.855760 systemd-logind[1582]: Removed session 15.
Apr 13 20:42:43.957965 systemd[1]: Started sshd@16-10.128.0.46:22-20.229.252.112:40026.service - OpenSSH per-connection server daemon (20.229.252.112:40026).
Apr 13 20:42:44.649102 sshd[6269]: Accepted publickey for core from 20.229.252.112 port 40026 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:44.650757 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:44.657514 systemd-logind[1582]: New session 16 of user core.
Apr 13 20:42:44.666446 systemd[1]: Started session-16.scope - Session 16 of User core.
Apr 13 20:42:45.845384 sshd[6269]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:45.851250 systemd[1]: sshd@16-10.128.0.46:22-20.229.252.112:40026.service: Deactivated successfully.
Apr 13 20:42:45.856738 systemd-logind[1582]: Session 16 logged out. Waiting for processes to exit.
Apr 13 20:42:45.858268 systemd[1]: session-16.scope: Deactivated successfully.
Apr 13 20:42:45.860311 systemd-logind[1582]: Removed session 16.
Apr 13 20:42:45.963989 systemd[1]: Started sshd@17-10.128.0.46:22-20.229.252.112:59606.service - OpenSSH per-connection server daemon (20.229.252.112:59606).
Apr 13 20:42:46.665821 sshd[6296]: Accepted publickey for core from 20.229.252.112 port 59606 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:46.667599 sshd[6296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:46.673131 systemd-logind[1582]: New session 17 of user core.
Apr 13 20:42:46.676801 systemd[1]: Started session-17.scope - Session 17 of User core.
Apr 13 20:42:47.383222 sshd[6296]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:47.392682 systemd[1]: sshd@17-10.128.0.46:22-20.229.252.112:59606.service: Deactivated successfully.
Apr 13 20:42:47.407412 systemd[1]: session-17.scope: Deactivated successfully.
Apr 13 20:42:47.416108 systemd-logind[1582]: Session 17 logged out. Waiting for processes to exit.
Apr 13 20:42:47.423446 systemd-logind[1582]: Removed session 17.
Apr 13 20:42:47.503471 systemd[1]: Started sshd@18-10.128.0.46:22-20.229.252.112:59622.service - OpenSSH per-connection server daemon (20.229.252.112:59622).
Apr 13 20:42:48.222041 sshd[6327]: Accepted publickey for core from 20.229.252.112 port 59622 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:48.223946 sshd[6327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:48.230412 systemd-logind[1582]: New session 18 of user core.
Apr 13 20:42:48.233524 systemd[1]: Started session-18.scope - Session 18 of User core.
Apr 13 20:42:48.806596 sshd[6327]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:48.812578 systemd[1]: sshd@18-10.128.0.46:22-20.229.252.112:59622.service: Deactivated successfully.
Apr 13 20:42:48.818914 systemd[1]: session-18.scope: Deactivated successfully.
Apr 13 20:42:48.820431 systemd-logind[1582]: Session 18 logged out. Waiting for processes to exit.
Apr 13 20:42:48.823081 systemd-logind[1582]: Removed session 18.
Apr 13 20:42:53.928536 systemd[1]: Started sshd@19-10.128.0.46:22-20.229.252.112:59636.service - OpenSSH per-connection server daemon (20.229.252.112:59636).
Apr 13 20:42:54.639093 sshd[6343]: Accepted publickey for core from 20.229.252.112 port 59636 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:42:54.641182 sshd[6343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:42:54.648015 systemd-logind[1582]: New session 19 of user core.
Apr 13 20:42:54.653558 systemd[1]: Started session-19.scope - Session 19 of User core.
Apr 13 20:42:55.206901 sshd[6343]: pam_unix(sshd:session): session closed for user core
Apr 13 20:42:55.211566 systemd[1]: sshd@19-10.128.0.46:22-20.229.252.112:59636.service: Deactivated successfully.
Apr 13 20:42:55.219425 systemd[1]: session-19.scope: Deactivated successfully.
Apr 13 20:42:55.220772 systemd-logind[1582]: Session 19 logged out. Waiting for processes to exit.
Apr 13 20:42:55.222454 systemd-logind[1582]: Removed session 19.
Apr 13 20:43:00.322473 systemd[1]: Started sshd@20-10.128.0.46:22-20.229.252.112:57552.service - OpenSSH per-connection server daemon (20.229.252.112:57552).
Apr 13 20:43:01.006759 sshd[6357]: Accepted publickey for core from 20.229.252.112 port 57552 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:43:01.008915 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:43:01.014871 systemd-logind[1582]: New session 20 of user core.
Apr 13 20:43:01.021405 systemd[1]: Started session-20.scope - Session 20 of User core.
Apr 13 20:43:01.603531 sshd[6357]: pam_unix(sshd:session): session closed for user core
Apr 13 20:43:01.608613 systemd[1]: sshd@20-10.128.0.46:22-20.229.252.112:57552.service: Deactivated successfully.
Apr 13 20:43:01.615620 systemd-logind[1582]: Session 20 logged out. Waiting for processes to exit.
Apr 13 20:43:01.616672 systemd[1]: session-20.scope: Deactivated successfully.
Apr 13 20:43:01.619008 systemd-logind[1582]: Removed session 20.