Apr 13 20:32:17.243658 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026
Apr 13 20:32:17.243726 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:32:17.243755 kernel: BIOS-provided physical RAM map:
Apr 13 20:32:17.243776 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Apr 13 20:32:17.243797 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Apr 13 20:32:17.243820 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Apr 13 20:32:17.243849 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Apr 13 20:32:17.243878 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Apr 13 20:32:17.243892 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Apr 13 20:32:17.243908 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved
Apr 13 20:32:17.243925 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20
Apr 13 20:32:17.243941 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved
Apr 13 20:32:17.243958 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Apr 13 20:32:17.243975 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Apr 13 20:32:17.244002 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Apr 13 20:32:17.244021 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Apr 13 20:32:17.244042 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Apr 13 20:32:17.244089 kernel: NX (Execute Disable) protection: active
Apr 13 20:32:17.244107 kernel: APIC: Static calls initialized
Apr 13 20:32:17.244129 kernel: efi: EFI v2.7 by EDK II
Apr 13 20:32:17.244151 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018
Apr 13 20:32:17.244176 kernel: SMBIOS 2.4 present.
Apr 13 20:32:17.244201 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Apr 13 20:32:17.244224 kernel: Hypervisor detected: KVM
Apr 13 20:32:17.244253 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Apr 13 20:32:17.244274 kernel: kvm-clock: using sched offset of 13374061603 cycles
Apr 13 20:32:17.244299 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Apr 13 20:32:17.244325 kernel: tsc: Detected 2299.998 MHz processor
Apr 13 20:32:17.244348 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Apr 13 20:32:17.244374 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Apr 13 20:32:17.244399 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Apr 13 20:32:17.244424 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs
Apr 13 20:32:17.244448 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Apr 13 20:32:17.244477 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Apr 13 20:32:17.244503 kernel: Using GB pages for direct mapping
Apr 13 20:32:17.244536 kernel: Secure boot disabled
Apr 13 20:32:17.244562 kernel: ACPI: Early table checksum verification disabled
Apr 13 20:32:17.244587 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Apr 13 20:32:17.244611 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Apr 13 20:32:17.244638 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Apr 13 20:32:17.244676 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Apr 13 20:32:17.244705 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Apr 13 20:32:17.244742 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404)
Apr 13 20:32:17.244770 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Apr 13 20:32:17.244795 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Apr 13 20:32:17.244823 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Apr 13 20:32:17.244847 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Apr 13 20:32:17.244880 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Apr 13 20:32:17.244904 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Apr 13 20:32:17.244930 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Apr 13 20:32:17.244958 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Apr 13 20:32:17.244983 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Apr 13 20:32:17.245007 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Apr 13 20:32:17.245040 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Apr 13 20:32:17.247131 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Apr 13 20:32:17.247153 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Apr 13 20:32:17.247191 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Apr 13 20:32:17.247219 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Apr 13 20:32:17.247250 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Apr 13 20:32:17.247272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Apr 13 20:32:17.247300 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Apr 13 20:32:17.247324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Apr 13 20:32:17.247347 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Apr 13 20:32:17.247375 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Apr 13 20:32:17.247399 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Apr 13 20:32:17.247426 kernel: Zone ranges:
Apr 13 20:32:17.247446 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Apr 13 20:32:17.247465 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Apr 13 20:32:17.247485 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:32:17.247504 kernel: Movable zone start for each node
Apr 13 20:32:17.247544 kernel: Early memory node ranges
Apr 13 20:32:17.247566 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Apr 13 20:32:17.247583 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Apr 13 20:32:17.247600 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Apr 13 20:32:17.247626 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Apr 13 20:32:17.247648 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Apr 13 20:32:17.247671 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Apr 13 20:32:17.247692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Apr 13 20:32:17.247709 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Apr 13 20:32:17.247727 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Apr 13 20:32:17.247747 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Apr 13 20:32:17.247768 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Apr 13 20:32:17.247789 kernel: ACPI: PM-Timer IO Port: 0xb008
Apr 13 20:32:17.247817 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Apr 13 20:32:17.247854 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Apr 13 20:32:17.247880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Apr 13 20:32:17.247905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Apr 13 20:32:17.247931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Apr 13 20:32:17.247964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Apr 13 20:32:17.247987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Apr 13 20:32:17.248022 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Apr 13 20:32:17.248069 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Apr 13 20:32:17.248100 kernel: Booting paravirtualized kernel on KVM
Apr 13 20:32:17.248124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Apr 13 20:32:17.248147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Apr 13 20:32:17.248170 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576
Apr 13 20:32:17.248192 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152
Apr 13 20:32:17.248212 kernel: pcpu-alloc: [0] 0 1
Apr 13 20:32:17.248234 kernel: kvm-guest: PV spinlocks enabled
Apr 13 20:32:17.248255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Apr 13 20:32:17.248279 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:32:17.248309 kernel: random: crng init done
Apr 13 20:32:17.248333 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Apr 13 20:32:17.248354 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Apr 13 20:32:17.248374 kernel: Fallback order for Node 0: 0
Apr 13 20:32:17.248400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Apr 13 20:32:17.248425 kernel: Policy zone: Normal
Apr 13 20:32:17.248449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Apr 13 20:32:17.248472 kernel: software IO TLB: area num 2.
Apr 13 20:32:17.248494 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347140K reserved, 0K cma-reserved)
Apr 13 20:32:17.248532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Apr 13 20:32:17.248555 kernel: Kernel/User page tables isolation: enabled
Apr 13 20:32:17.248578 kernel: ftrace: allocating 37996 entries in 149 pages
Apr 13 20:32:17.248599 kernel: ftrace: allocated 149 pages with 4 groups
Apr 13 20:32:17.248753 kernel: Dynamic Preempt: voluntary
Apr 13 20:32:17.248781 kernel: rcu: Preemptible hierarchical RCU implementation.
Apr 13 20:32:17.248811 kernel: rcu: RCU event tracing is enabled.
Apr 13 20:32:17.248839 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Apr 13 20:32:17.248899 kernel: Trampoline variant of Tasks RCU enabled.
Apr 13 20:32:17.248931 kernel: Rude variant of Tasks RCU enabled.
Apr 13 20:32:17.248960 kernel: Tracing variant of Tasks RCU enabled.
Apr 13 20:32:17.248991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Apr 13 20:32:17.249017 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Apr 13 20:32:17.251190 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Apr 13 20:32:17.251423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Apr 13 20:32:17.251503 kernel: Console: colour dummy device 80x25
Apr 13 20:32:17.251657 kernel: printk: console [ttyS0] enabled
Apr 13 20:32:17.251754 kernel: ACPI: Core revision 20230628
Apr 13 20:32:17.251821 kernel: APIC: Switch to symmetric I/O mode setup
Apr 13 20:32:17.251852 kernel: x2apic enabled
Apr 13 20:32:17.251881 kernel: APIC: Switched APIC routing to: physical x2apic
Apr 13 20:32:17.251915 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Apr 13 20:32:17.251942 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:32:17.251971 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Apr 13 20:32:17.251999 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Apr 13 20:32:17.252036 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Apr 13 20:32:17.253120 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Apr 13 20:32:17.253169 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Apr 13 20:32:17.253208 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Apr 13 20:32:17.253247 kernel: Spectre V2 : Mitigation: IBRS
Apr 13 20:32:17.253283 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Apr 13 20:32:17.253318 kernel: RETBleed: Mitigation: IBRS
Apr 13 20:32:17.253356 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Apr 13 20:32:17.253393 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl
Apr 13 20:32:17.253440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Apr 13 20:32:17.253475 kernel: MDS: Mitigation: Clear CPU buffers
Apr 13 20:32:17.253511 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Apr 13 20:32:17.253548 kernel: active return thunk: its_return_thunk
Apr 13 20:32:17.253584 kernel: ITS: Mitigation: Aligned branch/return thunks
Apr 13 20:32:17.253619 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Apr 13 20:32:17.253655 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Apr 13 20:32:17.253687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Apr 13 20:32:17.253714 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Apr 13 20:32:17.253750 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Apr 13 20:32:17.253777 kernel: Freeing SMP alternatives memory: 32K
Apr 13 20:32:17.253803 kernel: pid_max: default: 32768 minimum: 301
Apr 13 20:32:17.253829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Apr 13 20:32:17.253857 kernel: landlock: Up and running.
Apr 13 20:32:17.253883 kernel: SELinux: Initializing.
Apr 13 20:32:17.253913 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:32:17.253945 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Apr 13 20:32:17.253980 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Apr 13 20:32:17.254019 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:32:17.255099 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:32:17.255147 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Apr 13 20:32:17.255177 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Apr 13 20:32:17.255205 kernel: signal: max sigframe size: 1776
Apr 13 20:32:17.255246 kernel: rcu: Hierarchical SRCU implementation.
Apr 13 20:32:17.255271 kernel: rcu: Max phase no-delay instances is 400.
Apr 13 20:32:17.255294 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Apr 13 20:32:17.255317 kernel: smp: Bringing up secondary CPUs ...
Apr 13 20:32:17.255349 kernel: smpboot: x86: Booting SMP configuration:
Apr 13 20:32:17.255372 kernel: .... node #0, CPUs: #1
Apr 13 20:32:17.255398 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Apr 13 20:32:17.255422 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Apr 13 20:32:17.255444 kernel: smp: Brought up 1 node, 2 CPUs
Apr 13 20:32:17.255466 kernel: smpboot: Max logical packages: 1
Apr 13 20:32:17.255489 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Apr 13 20:32:17.255512 kernel: devtmpfs: initialized
Apr 13 20:32:17.255541 kernel: x86/mm: Memory block size: 128MB
Apr 13 20:32:17.255568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Apr 13 20:32:17.255591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Apr 13 20:32:17.255614 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Apr 13 20:32:17.255638 kernel: pinctrl core: initialized pinctrl subsystem
Apr 13 20:32:17.255661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Apr 13 20:32:17.255685 kernel: audit: initializing netlink subsys (disabled)
Apr 13 20:32:17.255708 kernel: audit: type=2000 audit(1776112335.716:1): state=initialized audit_enabled=0 res=1
Apr 13 20:32:17.255731 kernel: thermal_sys: Registered thermal governor 'step_wise'
Apr 13 20:32:17.255761 kernel: thermal_sys: Registered thermal governor 'user_space'
Apr 13 20:32:17.255784 kernel: cpuidle: using governor menu
Apr 13 20:32:17.255808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Apr 13 20:32:17.255832 kernel: dca service started, version 1.12.1
Apr 13 20:32:17.255855 kernel: PCI: Using configuration type 1 for base access
Apr 13 20:32:17.255878 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Apr 13 20:32:17.255902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Apr 13 20:32:17.255926 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Apr 13 20:32:17.255951 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Apr 13 20:32:17.255981 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Apr 13 20:32:17.256005 kernel: ACPI: Added _OSI(Module Device)
Apr 13 20:32:17.256028 kernel: ACPI: Added _OSI(Processor Device)
Apr 13 20:32:17.256077 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Apr 13 20:32:17.257131 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Apr 13 20:32:17.257156 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Apr 13 20:32:17.257179 kernel: ACPI: Interpreter enabled
Apr 13 20:32:17.257202 kernel: ACPI: PM: (supports S0 S3 S5)
Apr 13 20:32:17.257237 kernel: ACPI: Using IOAPIC for interrupt routing
Apr 13 20:32:17.257271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Apr 13 20:32:17.257294 kernel: PCI: Ignoring E820 reservations for host bridge windows
Apr 13 20:32:17.257317 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Apr 13 20:32:17.257340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Apr 13 20:32:17.257764 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Apr 13 20:32:17.259171 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Apr 13 20:32:17.259490 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Apr 13 20:32:17.259530 kernel: PCI host bridge to bus 0000:00
Apr 13 20:32:17.259810 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Apr 13 20:32:17.262190 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Apr 13 20:32:17.262766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Apr 13 20:32:17.263035 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Apr 13 20:32:17.263352 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Apr 13 20:32:17.263770 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Apr 13 20:32:17.265211 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Apr 13 20:32:17.265536 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Apr 13 20:32:17.265817 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Apr 13 20:32:17.267175 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Apr 13 20:32:17.267619 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Apr 13 20:32:17.267896 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Apr 13 20:32:17.270302 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Apr 13 20:32:17.270640 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Apr 13 20:32:17.270930 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Apr 13 20:32:17.271269 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Apr 13 20:32:17.271551 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Apr 13 20:32:17.271826 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Apr 13 20:32:17.271858 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Apr 13 20:32:17.271892 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Apr 13 20:32:17.271914 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Apr 13 20:32:17.271937 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Apr 13 20:32:17.271961 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Apr 13 20:32:17.271981 kernel: iommu: Default domain type: Translated
Apr 13 20:32:17.272001 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Apr 13 20:32:17.272023 kernel: efivars: Registered efivars operations
Apr 13 20:32:17.272052 kernel: PCI: Using ACPI for IRQ routing
Apr 13 20:32:17.272096 kernel: PCI: pci_cache_line_size set to 64 bytes
Apr 13 20:32:17.272132 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Apr 13 20:32:17.272164 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Apr 13 20:32:17.272195 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Apr 13 20:32:17.272225 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Apr 13 20:32:17.272256 kernel: vgaarb: loaded
Apr 13 20:32:17.272288 kernel: clocksource: Switched to clocksource kvm-clock
Apr 13 20:32:17.272328 kernel: VFS: Disk quotas dquot_6.6.0
Apr 13 20:32:17.272358 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Apr 13 20:32:17.272387 kernel: pnp: PnP ACPI init
Apr 13 20:32:17.272424 kernel: pnp: PnP ACPI: found 7 devices
Apr 13 20:32:17.272456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Apr 13 20:32:17.272487 kernel: NET: Registered PF_INET protocol family
Apr 13 20:32:17.272518 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Apr 13 20:32:17.272548 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Apr 13 20:32:17.272573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Apr 13 20:32:17.272594 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Apr 13 20:32:17.272612 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Apr 13 20:32:17.272636 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Apr 13 20:32:17.272664 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:32:17.272687 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Apr 13 20:32:17.272706 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Apr 13 20:32:17.272731 kernel: NET: Registered PF_XDP protocol family
Apr 13 20:32:17.272935 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Apr 13 20:32:17.273142 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Apr 13 20:32:17.273396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Apr 13 20:32:17.273636 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Apr 13 20:32:17.275284 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Apr 13 20:32:17.275333 kernel: PCI: CLS 0 bytes, default 64
Apr 13 20:32:17.275354 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Apr 13 20:32:17.275375 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB)
Apr 13 20:32:17.275394 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Apr 13 20:32:17.275414 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Apr 13 20:32:17.275435 kernel: clocksource: Switched to clocksource tsc
Apr 13 20:32:17.275455 kernel: Initialise system trusted keyrings
Apr 13 20:32:17.275482 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Apr 13 20:32:17.275502 kernel: Key type asymmetric registered
Apr 13 20:32:17.275522 kernel: Asymmetric key parser 'x509' registered
Apr 13 20:32:17.275544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Apr 13 20:32:17.275564 kernel: io scheduler mq-deadline registered
Apr 13 20:32:17.275585 kernel: io scheduler kyber registered
Apr 13 20:32:17.275605 kernel: io scheduler bfq registered
Apr 13 20:32:17.275625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Apr 13 20:32:17.275647 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Apr 13 20:32:17.275891 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Apr 13 20:32:17.275919 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Apr 13 20:32:17.276202 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Apr 13 20:32:17.276325 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Apr 13 20:32:17.276990 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Apr 13 20:32:17.280081 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Apr 13 20:32:17.280123 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Apr 13 20:32:17.280147 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Apr 13 20:32:17.280168 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Apr 13 20:32:17.280198 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Apr 13 20:32:17.280477 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Apr 13 20:32:17.280508 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Apr 13 20:32:17.280529 kernel: i8042: Warning: Keylock active
Apr 13 20:32:17.280550 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Apr 13 20:32:17.280570 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Apr 13 20:32:17.280801 kernel: rtc_cmos 00:00: RTC can wake from S4
Apr 13 20:32:17.281010 kernel: rtc_cmos 00:00: registered as rtc0
Apr 13 20:32:17.281285 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:32:16 UTC (1776112336)
Apr 13 20:32:17.281518 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Apr 13 20:32:17.281543 kernel: intel_pstate: CPU model not supported
Apr 13 20:32:17.281562 kernel: pstore: Using crash dump compression: deflate
Apr 13 20:32:17.281582 kernel: pstore: Registered efi_pstore as persistent store backend
Apr 13 20:32:17.281602 kernel: NET: Registered PF_INET6 protocol family
Apr 13 20:32:17.281621 kernel: Segment Routing with IPv6
Apr 13 20:32:17.281640 kernel: In-situ OAM (IOAM) with IPv6
Apr 13 20:32:17.281668 kernel: NET: Registered PF_PACKET protocol family
Apr 13 20:32:17.281688 kernel: Key type dns_resolver registered
Apr 13 20:32:17.281707 kernel: IPI shorthand broadcast: enabled
Apr 13 20:32:17.281726 kernel: sched_clock: Marking stable (1078072412, 340165812)->(1631807645, -213569421)
Apr 13 20:32:17.281745 kernel: registered taskstats version 1
Apr 13 20:32:17.281764 kernel: Loading compiled-in X.509 certificates
Apr 13 20:32:17.281783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00'
Apr 13 20:32:17.281803 kernel: Key type .fscrypt registered
Apr 13 20:32:17.281833 kernel: Key type fscrypt-provisioning registered
Apr 13 20:32:17.281867 kernel: ima: Allocated hash algorithm: sha1
Apr 13 20:32:17.281892 kernel: ima: No architecture policies found
Apr 13 20:32:17.281919 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Apr 13 20:32:17.281943 kernel: clk: Disabling unused clocks
Apr 13 20:32:17.281968 kernel: Freeing unused kernel image (initmem) memory: 42896K
Apr 13 20:32:17.281994 kernel: Write protecting the kernel read-only data: 36864k
Apr 13 20:32:17.282017 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K
Apr 13 20:32:17.282040 kernel: Run /init as init process
Apr 13 20:32:17.282098 kernel: with arguments:
Apr 13 20:32:17.282131 kernel: /init
Apr 13 20:32:17.282156 kernel: with environment:
Apr 13 20:32:17.282176 kernel: HOME=/
Apr 13 20:32:17.282201 kernel: TERM=linux
Apr 13 20:32:17.282225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:32:17.282247 systemd[1]: Detected virtualization google.
Apr 13 20:32:17.282267 systemd[1]: Detected architecture x86-64.
Apr 13 20:32:17.282298 systemd[1]: Running in initrd.
Apr 13 20:32:17.282333 systemd[1]: No hostname configured, using default hostname.
Apr 13 20:32:17.282362 systemd[1]: Hostname set to .
Apr 13 20:32:17.282391 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:32:17.282423 systemd[1]: Queued start job for default target initrd.target.
Apr 13 20:32:17.282454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:32:17.282487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:32:17.282521 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Apr 13 20:32:17.282554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:32:17.282574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Apr 13 20:32:17.282596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Apr 13 20:32:17.282622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Apr 13 20:32:17.282644 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Apr 13 20:32:17.282667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:32:17.282689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:32:17.282719 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:32:17.282748 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:32:17.282807 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:32:17.282841 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:32:17.282871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:32:17.282901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:32:17.282935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Apr 13 20:32:17.282965 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Apr 13 20:32:17.282996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:32:17.283025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:32:17.285107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:32:17.285140 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:32:17.285163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Apr 13 20:32:17.285187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:32:17.285209 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Apr 13 20:32:17.285240 systemd[1]: Starting systemd-fsck-usr.service...
Apr 13 20:32:17.285262 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:32:17.285284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:32:17.285318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:32:17.285341 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Apr 13 20:32:17.285407 systemd-journald[184]: Collecting audit messages is disabled.
Apr 13 20:32:17.285462 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:32:17.285485 systemd[1]: Finished systemd-fsck-usr.service.
Apr 13 20:32:17.285509 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Apr 13 20:32:17.285538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:32:17.285561 systemd-journald[184]: Journal started
Apr 13 20:32:17.285606 systemd-journald[184]: Runtime Journal (/run/log/journal/2be4b30b47af4aea8a562d65e5f04ccf) is 8.0M, max 148.7M, 140.7M free.
Apr 13 20:32:17.241132 systemd-modules-load[185]: Inserted module 'overlay'
Apr 13 20:32:17.295227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Apr 13 20:32:17.304383 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:32:17.306900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Apr 13 20:32:17.324095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Apr 13 20:32:17.327276 systemd-modules-load[185]: Inserted module 'br_netfilter'
Apr 13 20:32:17.329091 kernel: Bridge firewalling registered
Apr 13 20:32:17.335446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:32:17.340402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:32:17.343237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:32:17.359266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:32:17.377951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:32:17.390148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:32:17.397014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:32:17.405172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:32:17.443728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Apr 13 20:32:17.476738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:32:17.496880 dracut-cmdline[217]: dracut-dracut-053
Apr 13 20:32:17.504221 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156
Apr 13 20:32:17.559913 systemd-resolved[218]: Positive Trust Anchors:
Apr 13 20:32:17.559933 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:32:17.560014 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:32:17.661567 kernel: SCSI subsystem initialized
Apr 13 20:32:17.661747 kernel: Loading iSCSI transport class v2.0-870.
Apr 13 20:32:17.568009 systemd-resolved[218]: Defaulting to hostname 'linux'.
Apr 13 20:32:17.570804 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:32:17.691298 kernel: iscsi: registered transport (tcp)
Apr 13 20:32:17.592374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:32:17.725876 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:32:17.725987 kernel: QLogic iSCSI HBA Driver Apr 13 20:32:17.793719 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:32:17.799371 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:32:17.887425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:32:17.887527 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:32:17.887603 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:32:17.950111 kernel: raid6: avx2x4 gen() 17434 MB/s Apr 13 20:32:17.971086 kernel: raid6: avx2x2 gen() 17666 MB/s Apr 13 20:32:17.997136 kernel: raid6: avx2x1 gen() 13773 MB/s Apr 13 20:32:17.997225 kernel: raid6: using algorithm avx2x2 gen() 17666 MB/s Apr 13 20:32:18.024315 kernel: raid6: .... xor() 17015 MB/s, rmw enabled Apr 13 20:32:18.024392 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:32:18.056092 kernel: xor: automatically using best checksumming function avx Apr 13 20:32:18.253122 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:32:18.268960 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:32:18.275483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:32:18.337148 systemd-udevd[401]: Using default interface naming scheme 'v255'. Apr 13 20:32:18.346398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:32:18.385451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:32:18.407927 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Apr 13 20:32:18.456844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 20:32:18.463298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:32:18.607741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:32:18.629492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:32:18.686834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:32:18.710711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:32:18.733231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:32:18.757074 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:32:18.770746 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:32:18.790552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:32:18.815119 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:32:18.827450 kernel: blk-mq: reduced tag depth to 10240 Apr 13 20:32:18.848144 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:32:18.874112 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 13 20:32:18.874427 kernel: AES CTR mode by8 optimization enabled Apr 13 20:32:18.886245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:32:18.935041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:32:18.935363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:32:18.971432 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 13 20:32:19.060748 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 13 20:32:19.061993 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 13 20:32:19.062908 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 13 20:32:19.063678 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 13 20:32:19.064420 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:32:19.065204 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:32:19.065293 kernel: GPT:17805311 != 33554431 Apr 13 20:32:19.065370 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:32:19.065578 kernel: GPT:17805311 != 33554431 Apr 13 20:32:19.065712 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:32:19.065751 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:18.983224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:32:19.091558 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 13 20:32:18.983562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:19.022527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:32:19.085646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:32:19.155082 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (449) Apr 13 20:32:19.170132 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (445) Apr 13 20:32:19.187817 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 13 20:32:19.210626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:19.219884 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Apr 13 20:32:19.257908 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 13 20:32:19.269349 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 13 20:32:19.302287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:32:19.326354 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:32:19.343146 disk-uuid[541]: Primary Header is updated. Apr 13 20:32:19.343146 disk-uuid[541]: Secondary Entries is updated. Apr 13 20:32:19.343146 disk-uuid[541]: Secondary Header is updated. Apr 13 20:32:19.401235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.401295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.401323 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.360427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:32:19.453625 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:32:20.399353 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:20.399442 disk-uuid[542]: The operation has completed successfully. Apr 13 20:32:20.488222 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:32:20.488421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:32:20.526447 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:32:20.547483 sh[568]: Success Apr 13 20:32:20.564104 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 13 20:32:20.662368 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:32:20.687245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:32:20.696827 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:32:20.765308 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:32:20.765570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:20.765659 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:32:20.774812 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:32:20.781672 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:32:20.811093 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:32:20.816910 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:32:20.832154 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:32:20.837314 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:32:20.866317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:32:20.908920 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:20.909230 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:20.909318 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:32:20.932613 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:32:20.932715 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:32:20.961840 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:20.961223 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:32:20.981523 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:32:21.009357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:32:21.100325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:32:21.106482 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:32:21.237508 ignition[684]: Ignition 2.19.0 Apr 13 20:32:21.238277 ignition[684]: Stage: fetch-offline Apr 13 20:32:21.242419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:32:21.238373 ignition[684]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.242645 systemd-networkd[751]: lo: Link UP Apr 13 20:32:21.238431 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.242651 systemd-networkd[751]: lo: Gained carrier Apr 13 20:32:21.238673 ignition[684]: parsed url from cmdline: "" Apr 13 20:32:21.245311 systemd-networkd[751]: Enumeration completed Apr 13 20:32:21.238682 ignition[684]: no config URL provided Apr 13 20:32:21.246307 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:32:21.238710 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:32:21.246315 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:32:21.238749 ignition[684]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:32:21.248535 systemd-networkd[751]: eth0: Link UP Apr 13 20:32:21.238762 ignition[684]: failed to fetch config: resource requires networking Apr 13 20:32:21.248543 systemd-networkd[751]: eth0: Gained carrier Apr 13 20:32:21.239494 ignition[684]: Ignition finished successfully Apr 13 20:32:21.248556 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:32:21.340805 ignition[761]: Ignition 2.19.0 Apr 13 20:32:21.260358 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.70/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:32:21.340814 ignition[761]: Stage: fetch Apr 13 20:32:21.263464 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:32:21.341085 ignition[761]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.273733 systemd[1]: Reached target network.target - Network. Apr 13 20:32:21.341110 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.300343 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:32:21.341271 ignition[761]: parsed url from cmdline: "" Apr 13 20:32:21.354195 unknown[761]: fetched base config from "system" Apr 13 20:32:21.341281 ignition[761]: no config URL provided Apr 13 20:32:21.354211 unknown[761]: fetched base config from "system" Apr 13 20:32:21.341294 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:32:21.354231 unknown[761]: fetched user config from "gcp" Apr 13 20:32:21.341306 ignition[761]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:32:21.358687 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:32:21.341331 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 13 20:32:21.385404 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:32:21.346482 ignition[761]: GET result: OK Apr 13 20:32:21.430245 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:32:21.346602 ignition[761]: parsing config with SHA512: b0de58f18b3930f1d37c810a7515d180ae703a911d7cc57467610260aef8df08d9549b3d2e97cfcf5ca82bb22fec67491914077ce642a8ac77c4d404cbcce8fe Apr 13 20:32:21.457310 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 13 20:32:21.355370 ignition[761]: fetch: fetch complete Apr 13 20:32:21.508642 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:32:21.355382 ignition[761]: fetch: fetch passed Apr 13 20:32:21.538465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:32:21.355464 ignition[761]: Ignition finished successfully Apr 13 20:32:21.557537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:32:21.426784 ignition[767]: Ignition 2.19.0 Apr 13 20:32:21.577546 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:32:21.426796 ignition[767]: Stage: kargs Apr 13 20:32:21.597612 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:32:21.427038 ignition[767]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.617501 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:32:21.427109 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.640362 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:32:21.428176 ignition[767]: kargs: kargs passed Apr 13 20:32:21.428259 ignition[767]: Ignition finished successfully Apr 13 20:32:21.505431 ignition[773]: Ignition 2.19.0 Apr 13 20:32:21.505441 ignition[773]: Stage: disks Apr 13 20:32:21.505659 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.505684 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.506889 ignition[773]: disks: disks passed Apr 13 20:32:21.506949 ignition[773]: Ignition finished successfully Apr 13 20:32:21.700619 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:32:21.874258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:32:21.893215 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 13 20:32:22.053104 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:32:22.054734 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:32:22.055874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:32:22.090424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:32:22.106248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:32:22.126932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:32:22.186258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Apr 13 20:32:22.186296 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:22.186313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:22.186329 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:32:22.127033 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:32:22.218575 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:32:22.218617 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:32:22.127102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:32:22.144774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:32:22.229683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:32:22.252358 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 20:32:22.372840 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:32:22.383275 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:32:22.394536 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:32:22.405228 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:32:22.558444 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:32:22.587211 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:32:22.616284 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:22.613479 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:32:22.623266 systemd-networkd[751]: eth0: Gained IPv6LL Apr 13 20:32:22.645838 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:32:22.667315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:32:22.686596 ignition[904]: INFO : Ignition 2.19.0 Apr 13 20:32:22.686596 ignition[904]: INFO : Stage: mount Apr 13 20:32:22.708272 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:22.708272 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:22.708272 ignition[904]: INFO : mount: mount passed Apr 13 20:32:22.708272 ignition[904]: INFO : Ignition finished successfully Apr 13 20:32:22.690864 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:32:22.699248 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:32:23.061347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 20:32:23.108097 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (916) Apr 13 20:32:23.126335 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:23.126657 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:23.126747 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:32:23.149194 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:32:23.149287 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:32:23.152992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:32:23.193038 ignition[933]: INFO : Ignition 2.19.0 Apr 13 20:32:23.200266 ignition[933]: INFO : Stage: files Apr 13 20:32:23.200266 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:23.200266 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:23.200266 ignition[933]: DEBUG : files: compiled without relabeling support, skipping Apr 13 20:32:23.200266 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 13 20:32:23.200266 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 13 20:32:23.265253 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:32:23.265253 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:32:23.206903 unknown[933]: wrote ssh authorized keys file for user: core Apr 13 20:32:23.354420 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Apr 13 20:32:23.526198 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1 Apr 13 20:32:39.025394 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Apr 13 20:32:39.502741 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw" Apr 13 20:32:39.502741 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Apr 13 20:32:39.542250 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 13 20:32:39.542250 ignition[933]: INFO : files: files passed Apr 13 20:32:39.542250 ignition[933]: INFO : Ignition finished successfully Apr 13 20:32:39.509073 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 13 20:32:39.530345 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Apr 13 20:32:39.585271 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 13 20:32:39.649835 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 13 20:32:39.766288 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:32:39.766288 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:32:39.649995 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 13 20:32:39.805287 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 13 20:32:39.676835 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:32:39.694698 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 13 20:32:39.720365 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 13 20:32:39.806502 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 13 20:32:39.806675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 13 20:32:39.833483 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 13 20:32:39.853453 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 13 20:32:39.873613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:32:39.879490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 13 20:32:39.953133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:32:39.980437 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 13 20:32:40.025417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 13 20:32:40.037768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:32:40.059805 systemd[1]: Stopped target timers.target - Timer Units. Apr 13 20:32:40.078755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 13 20:32:40.079018 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 13 20:32:40.107772 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 13 20:32:40.128799 systemd[1]: Stopped target basic.target - Basic System. Apr 13 20:32:40.147777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 13 20:32:40.166814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:32:40.176895 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 13 20:32:40.207803 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 13 20:32:40.217813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:32:40.246819 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 13 20:32:40.267612 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 13 20:32:40.287642 systemd[1]: Stopped target swap.target - Swaps. Apr 13 20:32:40.306645 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 13 20:32:40.306836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:32:40.332719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 13 20:32:40.352688 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:32:40.373573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 13 20:32:40.373793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:32:40.396703 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 13 20:32:40.396951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 13 20:32:40.428749 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 13 20:32:40.429026 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 13 20:32:40.448724 systemd[1]: ignition-files.service: Deactivated successfully. Apr 13 20:32:40.448878 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 13 20:32:40.474618 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 13 20:32:40.487291 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 13 20:32:40.558310 ignition[985]: INFO : Ignition 2.19.0 Apr 13 20:32:40.558310 ignition[985]: INFO : Stage: umount Apr 13 20:32:40.558310 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:40.558310 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:40.558310 ignition[985]: INFO : umount: umount passed Apr 13 20:32:40.558310 ignition[985]: INFO : Ignition finished successfully Apr 13 20:32:40.487768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:32:40.530151 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 13 20:32:40.548414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 13 20:32:40.548781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:32:40.566854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Apr 13 20:32:40.567118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 13 20:32:40.634355 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 13 20:32:40.635695 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 13 20:32:40.635844 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 13 20:32:40.652290 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 13 20:32:40.652487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 13 20:32:40.674736 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 13 20:32:40.674883 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 13 20:32:40.692420 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 13 20:32:40.692573 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 13 20:32:40.711680 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 13 20:32:40.711767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 13 20:32:40.731721 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 13 20:32:40.731801 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 13 20:32:40.751602 systemd[1]: Stopped target network.target - Network. Apr 13 20:32:40.769503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 13 20:32:40.769604 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:32:40.789699 systemd[1]: Stopped target paths.target - Path Units. Apr 13 20:32:40.807269 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 13 20:32:40.812182 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:32:40.828534 systemd[1]: Stopped target slices.target - Slice Units. Apr 13 20:32:40.846495 systemd[1]: Stopped target sockets.target - Socket Units. 
Apr 13 20:32:40.855630 systemd[1]: iscsid.socket: Deactivated successfully. Apr 13 20:32:40.855698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:32:40.884664 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 13 20:32:40.884763 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:32:40.903573 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 13 20:32:40.903661 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 13 20:32:40.922635 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 13 20:32:40.922720 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 13 20:32:40.941629 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 13 20:32:40.941717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 13 20:32:40.960870 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 13 20:32:40.966281 systemd-networkd[751]: eth0: DHCPv6 lease lost Apr 13 20:32:40.979815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 13 20:32:40.999199 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 13 20:32:40.999395 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 13 20:32:41.020594 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 13 20:32:41.020819 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 13 20:32:41.039520 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 13 20:32:41.039594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:32:41.493281 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Apr 13 20:32:41.064235 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 13 20:32:41.077203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Apr 13 20:32:41.077350 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:32:41.089531 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 13 20:32:41.089612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:32:41.109553 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 13 20:32:41.109648 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 13 20:32:41.132543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 13 20:32:41.132636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:32:41.151699 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:32:41.170982 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 13 20:32:41.171267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:32:41.199990 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 13 20:32:41.200237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 13 20:32:41.218388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 13 20:32:41.218511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:32:41.228348 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 13 20:32:41.228538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:32:41.256651 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 13 20:32:41.256745 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 13 20:32:41.285547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:32:41.285765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Apr 13 20:32:41.320323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 13 20:32:41.334254 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 13 20:32:41.334432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:32:41.346343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:32:41.346462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:41.359169 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 13 20:32:41.359334 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 13 20:32:41.378928 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 13 20:32:41.379095 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 13 20:32:41.397278 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 13 20:32:41.426398 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 13 20:32:41.450472 systemd[1]: Switching root. 
Apr 13 20:32:41.836302 systemd-journald[184]: Journal stopped Apr 13 20:32:17.243658 kernel: Linux version 6.6.127-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Apr 13 18:40:27 -00 2026 Apr 13 20:32:17.243726 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:32:17.243755 kernel: BIOS-provided physical RAM map: Apr 13 20:32:17.243776 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Apr 13 20:32:17.243797 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Apr 13 20:32:17.243820 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Apr 13 20:32:17.243849 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Apr 13 20:32:17.243878 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Apr 13 20:32:17.243892 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Apr 13 20:32:17.243908 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bf9ecfff] reserved Apr 13 20:32:17.243925 kernel: BIOS-e820: [mem 0x00000000bf9ed000-0x00000000bfaecfff] type 20 Apr 13 20:32:17.243941 kernel: BIOS-e820: [mem 0x00000000bfaed000-0x00000000bfb6cfff] reserved Apr 13 20:32:17.243958 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Apr 13 20:32:17.243975 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Apr 13 20:32:17.244002 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Apr 13 20:32:17.244021 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Apr 13 
20:32:17.244042 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Apr 13 20:32:17.244089 kernel: NX (Execute Disable) protection: active Apr 13 20:32:17.244107 kernel: APIC: Static calls initialized Apr 13 20:32:17.244129 kernel: efi: EFI v2.7 by EDK II Apr 13 20:32:17.244151 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 Apr 13 20:32:17.244176 kernel: SMBIOS 2.4 present. Apr 13 20:32:17.244201 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 Apr 13 20:32:17.244224 kernel: Hypervisor detected: KVM Apr 13 20:32:17.244253 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 13 20:32:17.244274 kernel: kvm-clock: using sched offset of 13374061603 cycles Apr 13 20:32:17.244299 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 13 20:32:17.244325 kernel: tsc: Detected 2299.998 MHz processor Apr 13 20:32:17.244348 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 13 20:32:17.244374 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 13 20:32:17.244399 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Apr 13 20:32:17.244424 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Apr 13 20:32:17.244448 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 13 20:32:17.244477 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Apr 13 20:32:17.244503 kernel: Using GB pages for direct mapping Apr 13 20:32:17.244536 kernel: Secure boot disabled Apr 13 20:32:17.244562 kernel: ACPI: Early table checksum verification disabled Apr 13 20:32:17.244587 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Apr 13 20:32:17.244611 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Apr 13 20:32:17.244638 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 
00000001) Apr 13 20:32:17.244676 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Apr 13 20:32:17.244705 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Apr 13 20:32:17.244742 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Apr 13 20:32:17.244770 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Apr 13 20:32:17.244795 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Apr 13 20:32:17.244823 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Apr 13 20:32:17.244847 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Apr 13 20:32:17.244880 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Apr 13 20:32:17.244904 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Apr 13 20:32:17.244930 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Apr 13 20:32:17.244958 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Apr 13 20:32:17.244983 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Apr 13 20:32:17.245007 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Apr 13 20:32:17.245040 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Apr 13 20:32:17.247131 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Apr 13 20:32:17.247153 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Apr 13 20:32:17.247191 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Apr 13 20:32:17.247219 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 13 20:32:17.247250 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 13 20:32:17.247272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 13 20:32:17.247300 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x00100000-0xbfffffff] Apr 13 20:32:17.247324 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Apr 13 20:32:17.247347 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Apr 13 20:32:17.247375 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Apr 13 20:32:17.247399 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Apr 13 20:32:17.247426 kernel: Zone ranges: Apr 13 20:32:17.247446 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 13 20:32:17.247465 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Apr 13 20:32:17.247485 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:32:17.247504 kernel: Movable zone start for each node Apr 13 20:32:17.247544 kernel: Early memory node ranges Apr 13 20:32:17.247566 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Apr 13 20:32:17.247583 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Apr 13 20:32:17.247600 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Apr 13 20:32:17.247626 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Apr 13 20:32:17.247648 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Apr 13 20:32:17.247671 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Apr 13 20:32:17.247692 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 13 20:32:17.247709 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Apr 13 20:32:17.247727 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Apr 13 20:32:17.247747 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Apr 13 20:32:17.247768 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Apr 13 20:32:17.247789 kernel: ACPI: PM-Timer IO Port: 0xb008 Apr 13 20:32:17.247817 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 13 20:32:17.247854 kernel: IOAPIC[0]: apic_id 
0, version 17, address 0xfec00000, GSI 0-23 Apr 13 20:32:17.247880 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 13 20:32:17.247905 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 13 20:32:17.247931 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 13 20:32:17.247964 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 13 20:32:17.247987 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 13 20:32:17.248022 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Apr 13 20:32:17.248069 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Apr 13 20:32:17.248100 kernel: Booting paravirtualized kernel on KVM Apr 13 20:32:17.248124 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 13 20:32:17.248147 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Apr 13 20:32:17.248170 kernel: percpu: Embedded 57 pages/cpu s196328 r8192 d28952 u1048576 Apr 13 20:32:17.248192 kernel: pcpu-alloc: s196328 r8192 d28952 u1048576 alloc=1*2097152 Apr 13 20:32:17.248212 kernel: pcpu-alloc: [0] 0 1 Apr 13 20:32:17.248234 kernel: kvm-guest: PV spinlocks enabled Apr 13 20:32:17.248255 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 13 20:32:17.248279 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:32:17.248309 kernel: random: crng init done Apr 13 20:32:17.248333 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Apr 13 20:32:17.248354 kernel: Inode-cache hash table entries: 524288 
(order: 10, 4194304 bytes, linear) Apr 13 20:32:17.248374 kernel: Fallback order for Node 0: 0 Apr 13 20:32:17.248400 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280 Apr 13 20:32:17.248425 kernel: Policy zone: Normal Apr 13 20:32:17.248449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 13 20:32:17.248472 kernel: software IO TLB: area num 2. Apr 13 20:32:17.248494 kernel: Memory: 7513184K/7860584K available (12288K kernel code, 2288K rwdata, 22752K rodata, 42896K init, 2300K bss, 347140K reserved, 0K cma-reserved) Apr 13 20:32:17.248532 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Apr 13 20:32:17.248555 kernel: Kernel/User page tables isolation: enabled Apr 13 20:32:17.248578 kernel: ftrace: allocating 37996 entries in 149 pages Apr 13 20:32:17.248599 kernel: ftrace: allocated 149 pages with 4 groups Apr 13 20:32:17.248753 kernel: Dynamic Preempt: voluntary Apr 13 20:32:17.248781 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 13 20:32:17.248811 kernel: rcu: RCU event tracing is enabled. Apr 13 20:32:17.248839 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Apr 13 20:32:17.248899 kernel: Trampoline variant of Tasks RCU enabled. Apr 13 20:32:17.248931 kernel: Rude variant of Tasks RCU enabled. Apr 13 20:32:17.248960 kernel: Tracing variant of Tasks RCU enabled. Apr 13 20:32:17.248991 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 13 20:32:17.249017 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Apr 13 20:32:17.251190 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Apr 13 20:32:17.251423 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Apr 13 20:32:17.251503 kernel: Console: colour dummy device 80x25 Apr 13 20:32:17.251657 kernel: printk: console [ttyS0] enabled Apr 13 20:32:17.251754 kernel: ACPI: Core revision 20230628 Apr 13 20:32:17.251821 kernel: APIC: Switch to symmetric I/O mode setup Apr 13 20:32:17.251852 kernel: x2apic enabled Apr 13 20:32:17.251881 kernel: APIC: Switched APIC routing to: physical x2apic Apr 13 20:32:17.251915 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Apr 13 20:32:17.251942 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:32:17.251971 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Apr 13 20:32:17.251999 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Apr 13 20:32:17.252036 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Apr 13 20:32:17.253120 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 13 20:32:17.253169 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Apr 13 20:32:17.253208 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Apr 13 20:32:17.253247 kernel: Spectre V2 : Mitigation: IBRS Apr 13 20:32:17.253283 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Apr 13 20:32:17.253318 kernel: RETBleed: Mitigation: IBRS Apr 13 20:32:17.253356 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 13 20:32:17.253393 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Apr 13 20:32:17.253440 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 13 20:32:17.253475 kernel: MDS: Mitigation: Clear CPU buffers Apr 13 20:32:17.253511 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Apr 13 20:32:17.253548 kernel: active return thunk: its_return_thunk Apr 13 20:32:17.253584 
kernel: ITS: Mitigation: Aligned branch/return thunks Apr 13 20:32:17.253619 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 13 20:32:17.253655 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 13 20:32:17.253687 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 13 20:32:17.253714 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 13 20:32:17.253750 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 13 20:32:17.253777 kernel: Freeing SMP alternatives memory: 32K Apr 13 20:32:17.253803 kernel: pid_max: default: 32768 minimum: 301 Apr 13 20:32:17.253829 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 13 20:32:17.253857 kernel: landlock: Up and running. Apr 13 20:32:17.253883 kernel: SELinux: Initializing. Apr 13 20:32:17.253913 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:32:17.253945 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 13 20:32:17.253980 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Apr 13 20:32:17.254019 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:32:17.255099 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:32:17.255147 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Apr 13 20:32:17.255177 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Apr 13 20:32:17.255205 kernel: signal: max sigframe size: 1776 Apr 13 20:32:17.255246 kernel: rcu: Hierarchical SRCU implementation. Apr 13 20:32:17.255271 kernel: rcu: Max phase no-delay instances is 400. 
Apr 13 20:32:17.255294 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 13 20:32:17.255317 kernel: smp: Bringing up secondary CPUs ... Apr 13 20:32:17.255349 kernel: smpboot: x86: Booting SMP configuration: Apr 13 20:32:17.255372 kernel: .... node #0, CPUs: #1 Apr 13 20:32:17.255398 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Apr 13 20:32:17.255422 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Apr 13 20:32:17.255444 kernel: smp: Brought up 1 node, 2 CPUs Apr 13 20:32:17.255466 kernel: smpboot: Max logical packages: 1 Apr 13 20:32:17.255489 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Apr 13 20:32:17.255512 kernel: devtmpfs: initialized Apr 13 20:32:17.255541 kernel: x86/mm: Memory block size: 128MB Apr 13 20:32:17.255568 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Apr 13 20:32:17.255591 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 13 20:32:17.255614 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Apr 13 20:32:17.255638 kernel: pinctrl core: initialized pinctrl subsystem Apr 13 20:32:17.255661 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 13 20:32:17.255685 kernel: audit: initializing netlink subsys (disabled) Apr 13 20:32:17.255708 kernel: audit: type=2000 audit(1776112335.716:1): state=initialized audit_enabled=0 res=1 Apr 13 20:32:17.255731 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 13 20:32:17.255761 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 13 20:32:17.255784 kernel: cpuidle: using governor menu Apr 13 20:32:17.255808 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 13 
20:32:17.255832 kernel: dca service started, version 1.12.1 Apr 13 20:32:17.255855 kernel: PCI: Using configuration type 1 for base access Apr 13 20:32:17.255878 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Apr 13 20:32:17.255902 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 13 20:32:17.255926 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 13 20:32:17.255951 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 13 20:32:17.255981 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 13 20:32:17.256005 kernel: ACPI: Added _OSI(Module Device) Apr 13 20:32:17.256028 kernel: ACPI: Added _OSI(Processor Device) Apr 13 20:32:17.256077 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 13 20:32:17.257131 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Apr 13 20:32:17.257156 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 13 20:32:17.257179 kernel: ACPI: Interpreter enabled Apr 13 20:32:17.257202 kernel: ACPI: PM: (supports S0 S3 S5) Apr 13 20:32:17.257237 kernel: ACPI: Using IOAPIC for interrupt routing Apr 13 20:32:17.257271 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 13 20:32:17.257294 kernel: PCI: Ignoring E820 reservations for host bridge windows Apr 13 20:32:17.257317 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Apr 13 20:32:17.257340 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 13 20:32:17.257764 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Apr 13 20:32:17.259171 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Apr 13 20:32:17.259490 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Apr 13 20:32:17.259530 kernel: PCI host bridge to bus 0000:00 Apr 13 
20:32:17.259810 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 13 20:32:17.262190 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 13 20:32:17.262766 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 13 20:32:17.263035 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Apr 13 20:32:17.263352 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 13 20:32:17.263770 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Apr 13 20:32:17.265211 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Apr 13 20:32:17.265536 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Apr 13 20:32:17.265817 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Apr 13 20:32:17.267175 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Apr 13 20:32:17.267619 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Apr 13 20:32:17.267896 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Apr 13 20:32:17.270302 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Apr 13 20:32:17.270640 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Apr 13 20:32:17.270930 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Apr 13 20:32:17.271269 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Apr 13 20:32:17.271551 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Apr 13 20:32:17.271826 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Apr 13 20:32:17.271858 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 13 20:32:17.271892 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 13 20:32:17.271914 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 13 20:32:17.271937 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 13 20:32:17.271961 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Apr 13 20:32:17.271981 
kernel: iommu: Default domain type: Translated Apr 13 20:32:17.272001 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 13 20:32:17.272023 kernel: efivars: Registered efivars operations Apr 13 20:32:17.272052 kernel: PCI: Using ACPI for IRQ routing Apr 13 20:32:17.272096 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 13 20:32:17.272132 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Apr 13 20:32:17.272164 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Apr 13 20:32:17.272195 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Apr 13 20:32:17.272225 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Apr 13 20:32:17.272256 kernel: vgaarb: loaded Apr 13 20:32:17.272288 kernel: clocksource: Switched to clocksource kvm-clock Apr 13 20:32:17.272328 kernel: VFS: Disk quotas dquot_6.6.0 Apr 13 20:32:17.272358 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 13 20:32:17.272387 kernel: pnp: PnP ACPI init Apr 13 20:32:17.272424 kernel: pnp: PnP ACPI: found 7 devices Apr 13 20:32:17.272456 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 13 20:32:17.272487 kernel: NET: Registered PF_INET protocol family Apr 13 20:32:17.272518 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 13 20:32:17.272548 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Apr 13 20:32:17.272573 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 13 20:32:17.272594 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Apr 13 20:32:17.272612 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Apr 13 20:32:17.272636 kernel: TCP: Hash tables configured (established 65536 bind 65536) Apr 13 20:32:17.272664 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:32:17.272687 kernel: UDP-Lite 
hash table entries: 4096 (order: 5, 131072 bytes, linear) Apr 13 20:32:17.272706 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 13 20:32:17.272731 kernel: NET: Registered PF_XDP protocol family Apr 13 20:32:17.272935 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 13 20:32:17.273142 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 13 20:32:17.273396 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 13 20:32:17.273636 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Apr 13 20:32:17.275284 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Apr 13 20:32:17.275333 kernel: PCI: CLS 0 bytes, default 64 Apr 13 20:32:17.275354 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 13 20:32:17.275375 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Apr 13 20:32:17.275394 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 13 20:32:17.275414 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Apr 13 20:32:17.275435 kernel: clocksource: Switched to clocksource tsc Apr 13 20:32:17.275455 kernel: Initialise system trusted keyrings Apr 13 20:32:17.275482 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Apr 13 20:32:17.275502 kernel: Key type asymmetric registered Apr 13 20:32:17.275522 kernel: Asymmetric key parser 'x509' registered Apr 13 20:32:17.275544 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 13 20:32:17.275564 kernel: io scheduler mq-deadline registered Apr 13 20:32:17.275585 kernel: io scheduler kyber registered Apr 13 20:32:17.275605 kernel: io scheduler bfq registered Apr 13 20:32:17.275625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 13 20:32:17.275647 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Apr 13 20:32:17.275891 kernel: virtio-pci 0000:00:03.0: 
virtio_pci: leaving for legacy driver Apr 13 20:32:17.275919 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Apr 13 20:32:17.276202 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Apr 13 20:32:17.276325 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Apr 13 20:32:17.276990 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Apr 13 20:32:17.280081 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 13 20:32:17.280123 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 13 20:32:17.280147 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Apr 13 20:32:17.280168 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Apr 13 20:32:17.280198 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Apr 13 20:32:17.280477 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Apr 13 20:32:17.280508 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 13 20:32:17.280529 kernel: i8042: Warning: Keylock active Apr 13 20:32:17.280550 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 13 20:32:17.280570 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 13 20:32:17.280801 kernel: rtc_cmos 00:00: RTC can wake from S4 Apr 13 20:32:17.281010 kernel: rtc_cmos 00:00: registered as rtc0 Apr 13 20:32:17.281285 kernel: rtc_cmos 00:00: setting system clock to 2026-04-13T20:32:16 UTC (1776112336) Apr 13 20:32:17.281518 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Apr 13 20:32:17.281543 kernel: intel_pstate: CPU model not supported Apr 13 20:32:17.281562 kernel: pstore: Using crash dump compression: deflate Apr 13 20:32:17.281582 kernel: pstore: Registered efi_pstore as persistent store backend Apr 13 20:32:17.281602 kernel: NET: Registered PF_INET6 protocol family Apr 13 20:32:17.281621 kernel: Segment Routing with IPv6 Apr 13 20:32:17.281640 kernel: In-situ OAM (IOAM) with IPv6 
Apr 13 20:32:17.281668 kernel: NET: Registered PF_PACKET protocol family Apr 13 20:32:17.281688 kernel: Key type dns_resolver registered Apr 13 20:32:17.281707 kernel: IPI shorthand broadcast: enabled Apr 13 20:32:17.281726 kernel: sched_clock: Marking stable (1078072412, 340165812)->(1631807645, -213569421) Apr 13 20:32:17.281745 kernel: registered taskstats version 1 Apr 13 20:32:17.281764 kernel: Loading compiled-in X.509 certificates Apr 13 20:32:17.281783 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.127-flatcar: 51221ce98a81ccf90ef3d16403b42695603c5d00' Apr 13 20:32:17.281803 kernel: Key type .fscrypt registered Apr 13 20:32:17.281833 kernel: Key type fscrypt-provisioning registered Apr 13 20:32:17.281867 kernel: ima: Allocated hash algorithm: sha1 Apr 13 20:32:17.281892 kernel: ima: No architecture policies found Apr 13 20:32:17.281919 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 13 20:32:17.281943 kernel: clk: Disabling unused clocks Apr 13 20:32:17.281968 kernel: Freeing unused kernel image (initmem) memory: 42896K Apr 13 20:32:17.281994 kernel: Write protecting the kernel read-only data: 36864k Apr 13 20:32:17.282017 kernel: Freeing unused kernel image (rodata/data gap) memory: 1824K Apr 13 20:32:17.282040 kernel: Run /init as init process Apr 13 20:32:17.282098 kernel: with arguments: Apr 13 20:32:17.282131 kernel: /init Apr 13 20:32:17.282156 kernel: with environment: Apr 13 20:32:17.282176 kernel: HOME=/ Apr 13 20:32:17.282201 kernel: TERM=linux Apr 13 20:32:17.282225 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Apr 13 20:32:17.282247 systemd[1]: Detected virtualization google. 
Apr 13 20:32:17.282267 systemd[1]: Detected architecture x86-64. Apr 13 20:32:17.282298 systemd[1]: Running in initrd. Apr 13 20:32:17.282333 systemd[1]: No hostname configured, using default hostname. Apr 13 20:32:17.282362 systemd[1]: Hostname set to . Apr 13 20:32:17.282391 systemd[1]: Initializing machine ID from random generator. Apr 13 20:32:17.282423 systemd[1]: Queued start job for default target initrd.target. Apr 13 20:32:17.282454 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 13 20:32:17.282487 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 13 20:32:17.282521 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 13 20:32:17.282554 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 13 20:32:17.282574 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 13 20:32:17.282596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 13 20:32:17.282622 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 13 20:32:17.282644 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Apr 13 20:32:17.282667 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 13 20:32:17.282689 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 13 20:32:17.282719 systemd[1]: Reached target paths.target - Path Units. Apr 13 20:32:17.282748 systemd[1]: Reached target slices.target - Slice Units. Apr 13 20:32:17.282807 systemd[1]: Reached target swap.target - Swaps. Apr 13 20:32:17.282841 systemd[1]: Reached target timers.target - Timer Units. 
Apr 13 20:32:17.282871 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 13 20:32:17.282901 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 13 20:32:17.282935 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 13 20:32:17.282965 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Apr 13 20:32:17.282996 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 13 20:32:17.283025 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 13 20:32:17.285107 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 13 20:32:17.285140 systemd[1]: Reached target sockets.target - Socket Units. Apr 13 20:32:17.285163 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 13 20:32:17.285187 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 13 20:32:17.285209 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 13 20:32:17.285240 systemd[1]: Starting systemd-fsck-usr.service... Apr 13 20:32:17.285262 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 13 20:32:17.285284 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 13 20:32:17.285318 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:32:17.285341 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 13 20:32:17.285407 systemd-journald[184]: Collecting audit messages is disabled. Apr 13 20:32:17.285462 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 13 20:32:17.285485 systemd[1]: Finished systemd-fsck-usr.service. Apr 13 20:32:17.285509 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Apr 13 20:32:17.285538 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:17.285561 systemd-journald[184]: Journal started Apr 13 20:32:17.285606 systemd-journald[184]: Runtime Journal (/run/log/journal/2be4b30b47af4aea8a562d65e5f04ccf) is 8.0M, max 148.7M, 140.7M free. Apr 13 20:32:17.241132 systemd-modules-load[185]: Inserted module 'overlay' Apr 13 20:32:17.295227 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:32:17.304383 systemd[1]: Started systemd-journald.service - Journal Service. Apr 13 20:32:17.306900 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 13 20:32:17.324095 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 13 20:32:17.327276 systemd-modules-load[185]: Inserted module 'br_netfilter' Apr 13 20:32:17.329091 kernel: Bridge firewalling registered Apr 13 20:32:17.335446 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 13 20:32:17.340402 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 13 20:32:17.343237 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 13 20:32:17.359266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 13 20:32:17.377951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 13 20:32:17.390148 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:32:17.397014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 13 20:32:17.405172 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 13 20:32:17.443728 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Apr 13 20:32:17.476738 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 13 20:32:17.496880 dracut-cmdline[217]: dracut-dracut-053 Apr 13 20:32:17.504221 dracut-cmdline[217]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=c1ba97db2f6278922cfc5bd0ca74b4bb573fca2c3aed19c121a34271e693e156 Apr 13 20:32:17.559913 systemd-resolved[218]: Positive Trust Anchors: Apr 13 20:32:17.559933 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 13 20:32:17.560014 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 13 20:32:17.661567 kernel: SCSI subsystem initialized Apr 13 20:32:17.661747 kernel: Loading iSCSI transport class v2.0-870. Apr 13 20:32:17.568009 systemd-resolved[218]: Defaulting to hostname 'linux'. Apr 13 20:32:17.570804 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 13 20:32:17.691298 kernel: iscsi: registered transport (tcp) Apr 13 20:32:17.592374 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Apr 13 20:32:17.725876 kernel: iscsi: registered transport (qla4xxx) Apr 13 20:32:17.725987 kernel: QLogic iSCSI HBA Driver Apr 13 20:32:17.793719 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 13 20:32:17.799371 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 13 20:32:17.887425 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Apr 13 20:32:17.887527 kernel: device-mapper: uevent: version 1.0.3 Apr 13 20:32:17.887603 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 13 20:32:17.950111 kernel: raid6: avx2x4 gen() 17434 MB/s Apr 13 20:32:17.971086 kernel: raid6: avx2x2 gen() 17666 MB/s Apr 13 20:32:17.997136 kernel: raid6: avx2x1 gen() 13773 MB/s Apr 13 20:32:17.997225 kernel: raid6: using algorithm avx2x2 gen() 17666 MB/s Apr 13 20:32:18.024315 kernel: raid6: .... xor() 17015 MB/s, rmw enabled Apr 13 20:32:18.024392 kernel: raid6: using avx2x2 recovery algorithm Apr 13 20:32:18.056092 kernel: xor: automatically using best checksumming function avx Apr 13 20:32:18.253122 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 13 20:32:18.268960 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 13 20:32:18.275483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 13 20:32:18.337148 systemd-udevd[401]: Using default interface naming scheme 'v255'. Apr 13 20:32:18.346398 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 13 20:32:18.385451 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 13 20:32:18.407927 dracut-pre-trigger[412]: rd.md=0: removing MD RAID activation Apr 13 20:32:18.456844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Apr 13 20:32:18.463298 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 13 20:32:18.607741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 13 20:32:18.629492 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 13 20:32:18.686834 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 13 20:32:18.710711 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 13 20:32:18.733231 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 13 20:32:18.757074 kernel: cryptd: max_cpu_qlen set to 1000 Apr 13 20:32:18.770746 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 13 20:32:18.790552 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 13 20:32:18.815119 kernel: scsi host0: Virtio SCSI HBA Apr 13 20:32:18.827450 kernel: blk-mq: reduced tag depth to 10240 Apr 13 20:32:18.848144 kernel: AVX2 version of gcm_enc/dec engaged. Apr 13 20:32:18.874112 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Apr 13 20:32:18.874427 kernel: AES CTR mode by8 optimization enabled Apr 13 20:32:18.886245 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 13 20:32:18.935041 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 13 20:32:18.935363 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:32:18.971432 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 13 20:32:19.060748 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB) Apr 13 20:32:19.061993 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Apr 13 20:32:19.062908 kernel: sd 0:0:1:0: [sda] Write Protect is off Apr 13 20:32:19.063678 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Apr 13 20:32:19.064420 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Apr 13 20:32:19.065204 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 13 20:32:19.065293 kernel: GPT:17805311 != 33554431 Apr 13 20:32:19.065370 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 13 20:32:19.065578 kernel: GPT:17805311 != 33554431 Apr 13 20:32:19.065712 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 13 20:32:19.065751 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:18.983224 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 13 20:32:19.091558 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Apr 13 20:32:18.983562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:19.022527 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:32:19.085646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 13 20:32:19.155082 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/sda6 scanned by (udev-worker) (449) Apr 13 20:32:19.170132 kernel: BTRFS: device fsid de1edd48-4571-4695-92f0-7af6e33c4e3d devid 1 transid 31 /dev/sda3 scanned by (udev-worker) (445) Apr 13 20:32:19.187817 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM. Apr 13 20:32:19.210626 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 13 20:32:19.219884 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT. 
Apr 13 20:32:19.257908 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A. Apr 13 20:32:19.269349 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A. Apr 13 20:32:19.302287 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Apr 13 20:32:19.326354 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 13 20:32:19.343146 disk-uuid[541]: Primary Header is updated. Apr 13 20:32:19.343146 disk-uuid[541]: Secondary Entries is updated. Apr 13 20:32:19.343146 disk-uuid[541]: Secondary Header is updated. Apr 13 20:32:19.401235 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.401295 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.401323 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:19.360427 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 13 20:32:19.453625 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 13 20:32:20.399353 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Apr 13 20:32:20.399442 disk-uuid[542]: The operation has completed successfully. Apr 13 20:32:20.488222 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 13 20:32:20.488421 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 13 20:32:20.526447 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 13 20:32:20.547483 sh[568]: Success Apr 13 20:32:20.564104 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Apr 13 20:32:20.662368 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 13 20:32:20.687245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Apr 13 20:32:20.696827 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Apr 13 20:32:20.765308 kernel: BTRFS info (device dm-0): first mount of filesystem de1edd48-4571-4695-92f0-7af6e33c4e3d Apr 13 20:32:20.765570 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:20.765659 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 13 20:32:20.774812 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 13 20:32:20.781672 kernel: BTRFS info (device dm-0): using free space tree Apr 13 20:32:20.811093 kernel: BTRFS info (device dm-0): enabling ssd optimizations Apr 13 20:32:20.816910 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 13 20:32:20.832154 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 13 20:32:20.837314 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 13 20:32:20.866317 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 13 20:32:20.908920 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:20.909230 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:20.909318 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:32:20.932613 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:32:20.932715 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:32:20.961840 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:20.961223 systemd[1]: mnt-oem.mount: Deactivated successfully. Apr 13 20:32:20.981523 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 13 20:32:21.009357 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Apr 13 20:32:21.100325 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 13 20:32:21.106482 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 13 20:32:21.237508 ignition[684]: Ignition 2.19.0 Apr 13 20:32:21.238277 ignition[684]: Stage: fetch-offline Apr 13 20:32:21.242419 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 13 20:32:21.238373 ignition[684]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.242645 systemd-networkd[751]: lo: Link UP Apr 13 20:32:21.238431 ignition[684]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.242651 systemd-networkd[751]: lo: Gained carrier Apr 13 20:32:21.238673 ignition[684]: parsed url from cmdline: "" Apr 13 20:32:21.245311 systemd-networkd[751]: Enumeration completed Apr 13 20:32:21.238682 ignition[684]: no config URL provided Apr 13 20:32:21.246307 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 13 20:32:21.238710 ignition[684]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:32:21.246315 systemd-networkd[751]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 13 20:32:21.238749 ignition[684]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:32:21.248535 systemd-networkd[751]: eth0: Link UP Apr 13 20:32:21.238762 ignition[684]: failed to fetch config: resource requires networking Apr 13 20:32:21.248543 systemd-networkd[751]: eth0: Gained carrier Apr 13 20:32:21.239494 ignition[684]: Ignition finished successfully Apr 13 20:32:21.248556 systemd-networkd[751]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Apr 13 20:32:21.340805 ignition[761]: Ignition 2.19.0 Apr 13 20:32:21.260358 systemd-networkd[751]: eth0: DHCPv4 address 10.128.0.70/32, gateway 10.128.0.1 acquired from 169.254.169.254 Apr 13 20:32:21.340814 ignition[761]: Stage: fetch Apr 13 20:32:21.263464 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 13 20:32:21.341085 ignition[761]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.273733 systemd[1]: Reached target network.target - Network. Apr 13 20:32:21.341110 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.300343 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 13 20:32:21.341271 ignition[761]: parsed url from cmdline: "" Apr 13 20:32:21.354195 unknown[761]: fetched base config from "system" Apr 13 20:32:21.341281 ignition[761]: no config URL provided Apr 13 20:32:21.354211 unknown[761]: fetched base config from "system" Apr 13 20:32:21.341294 ignition[761]: reading system config file "/usr/lib/ignition/user.ign" Apr 13 20:32:21.354231 unknown[761]: fetched user config from "gcp" Apr 13 20:32:21.341306 ignition[761]: no config at "/usr/lib/ignition/user.ign" Apr 13 20:32:21.358687 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 13 20:32:21.341331 ignition[761]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Apr 13 20:32:21.385404 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 13 20:32:21.346482 ignition[761]: GET result: OK Apr 13 20:32:21.430245 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 13 20:32:21.346602 ignition[761]: parsing config with SHA512: b0de58f18b3930f1d37c810a7515d180ae703a911d7cc57467610260aef8df08d9549b3d2e97cfcf5ca82bb22fec67491914077ce642a8ac77c4d404cbcce8fe Apr 13 20:32:21.457310 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Apr 13 20:32:21.355370 ignition[761]: fetch: fetch complete Apr 13 20:32:21.508642 systemd[1]: Finished ignition-disks.service - Ignition (disks). Apr 13 20:32:21.355382 ignition[761]: fetch: fetch passed Apr 13 20:32:21.538465 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 13 20:32:21.355464 ignition[761]: Ignition finished successfully Apr 13 20:32:21.557537 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 13 20:32:21.426784 ignition[767]: Ignition 2.19.0 Apr 13 20:32:21.577546 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 13 20:32:21.426796 ignition[767]: Stage: kargs Apr 13 20:32:21.597612 systemd[1]: Reached target sysinit.target - System Initialization. Apr 13 20:32:21.427038 ignition[767]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.617501 systemd[1]: Reached target basic.target - Basic System. Apr 13 20:32:21.427109 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.640362 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 13 20:32:21.428176 ignition[767]: kargs: kargs passed Apr 13 20:32:21.428259 ignition[767]: Ignition finished successfully Apr 13 20:32:21.505431 ignition[773]: Ignition 2.19.0 Apr 13 20:32:21.505441 ignition[773]: Stage: disks Apr 13 20:32:21.505659 ignition[773]: no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:21.505684 ignition[773]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:21.506889 ignition[773]: disks: disks passed Apr 13 20:32:21.506949 ignition[773]: Ignition finished successfully Apr 13 20:32:21.700619 systemd-fsck[781]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 13 20:32:21.874258 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 13 20:32:21.893215 systemd[1]: Mounting sysroot.mount - /sysroot... 
Apr 13 20:32:22.053104 kernel: EXT4-fs (sda9): mounted filesystem e02793bf-3e0d-4c7e-b11a-92c664da7ce3 r/w with ordered data mode. Quota mode: none. Apr 13 20:32:22.054734 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 13 20:32:22.055874 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 13 20:32:22.090424 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 13 20:32:22.106248 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 13 20:32:22.126932 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 13 20:32:22.186258 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 scanned by mount (789) Apr 13 20:32:22.186296 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:22.186313 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Apr 13 20:32:22.186329 kernel: BTRFS info (device sda6): using free space tree Apr 13 20:32:22.127033 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 13 20:32:22.218575 kernel: BTRFS info (device sda6): enabling ssd optimizations Apr 13 20:32:22.218617 kernel: BTRFS info (device sda6): auto enabling async discard Apr 13 20:32:22.127102 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 13 20:32:22.144774 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 13 20:32:22.229683 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 13 20:32:22.252358 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Apr 13 20:32:22.372840 initrd-setup-root[813]: cut: /sysroot/etc/passwd: No such file or directory Apr 13 20:32:22.383275 initrd-setup-root[820]: cut: /sysroot/etc/group: No such file or directory Apr 13 20:32:22.394536 initrd-setup-root[827]: cut: /sysroot/etc/shadow: No such file or directory Apr 13 20:32:22.405228 initrd-setup-root[834]: cut: /sysroot/etc/gshadow: No such file or directory Apr 13 20:32:22.558444 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 13 20:32:22.587211 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Apr 13 20:32:22.616284 kernel: BTRFS info (device sda6): last unmount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af Apr 13 20:32:22.613479 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 13 20:32:22.623266 systemd-networkd[751]: eth0: Gained IPv6LL Apr 13 20:32:22.645838 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 13 20:32:22.667315 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 13 20:32:22.686596 ignition[904]: INFO : Ignition 2.19.0 Apr 13 20:32:22.686596 ignition[904]: INFO : Stage: mount Apr 13 20:32:22.708272 ignition[904]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 13 20:32:22.708272 ignition[904]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Apr 13 20:32:22.708272 ignition[904]: INFO : mount: mount passed Apr 13 20:32:22.708272 ignition[904]: INFO : Ignition finished successfully Apr 13 20:32:22.690864 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 13 20:32:22.699248 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 13 20:32:23.061347 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Apr 13 20:32:23.108097 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by mount (916)
Apr 13 20:32:23.126335 kernel: BTRFS info (device sda6): first mount of filesystem 7dd1319a-da93-42af-ac3b-f04d4587a8af
Apr 13 20:32:23.126657 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Apr 13 20:32:23.126747 kernel: BTRFS info (device sda6): using free space tree
Apr 13 20:32:23.149194 kernel: BTRFS info (device sda6): enabling ssd optimizations
Apr 13 20:32:23.149287 kernel: BTRFS info (device sda6): auto enabling async discard
Apr 13 20:32:23.152992 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Apr 13 20:32:23.193038 ignition[933]: INFO : Ignition 2.19.0
Apr 13 20:32:23.200266 ignition[933]: INFO : Stage: files
Apr 13 20:32:23.200266 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:32:23.200266 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:32:23.200266 ignition[933]: DEBUG : files: compiled without relabeling support, skipping
Apr 13 20:32:23.200266 ignition[933]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Apr 13 20:32:23.200266 ignition[933]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Apr 13 20:32:23.265253 ignition[933]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Apr 13 20:32:23.265253 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:32:23.265253 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Apr 13 20:32:23.206903 unknown[933]: wrote ssh authorized keys file for user: core
Apr 13 20:32:23.354420 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Apr 13 20:32:23.526198 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:32:23.544229 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.35.1-x86-64.raw: attempt #1
Apr 13 20:32:39.025394 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Apr 13 20:32:39.502741 ignition[933]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.35.1-x86-64.raw"
Apr 13 20:32:39.502741 ignition[933]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Apr 13 20:32:39.542250 ignition[933]: INFO : files: files passed
Apr 13 20:32:39.542250 ignition[933]: INFO : Ignition finished successfully
Apr 13 20:32:39.509073 systemd[1]: Finished ignition-files.service - Ignition (files).
Apr 13 20:32:39.530345 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Apr 13 20:32:39.585271 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Apr 13 20:32:39.649835 systemd[1]: ignition-quench.service: Deactivated successfully.
Apr 13 20:32:39.766288 initrd-setup-root-after-ignition[960]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:32:39.766288 initrd-setup-root-after-ignition[960]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:32:39.649995 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Apr 13 20:32:39.805287 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Apr 13 20:32:39.676835 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:32:39.694698 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Apr 13 20:32:39.720365 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Apr 13 20:32:39.806502 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Apr 13 20:32:39.806675 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Apr 13 20:32:39.833483 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Apr 13 20:32:39.853453 systemd[1]: Reached target initrd.target - Initrd Default Target.
Apr 13 20:32:39.873613 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Apr 13 20:32:39.879490 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Apr 13 20:32:39.953133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:32:39.980437 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Apr 13 20:32:40.025417 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:32:40.037768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:32:40.059805 systemd[1]: Stopped target timers.target - Timer Units.
Apr 13 20:32:40.078755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Apr 13 20:32:40.079018 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Apr 13 20:32:40.107772 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Apr 13 20:32:40.128799 systemd[1]: Stopped target basic.target - Basic System.
Apr 13 20:32:40.147777 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Apr 13 20:32:40.166814 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Apr 13 20:32:40.176895 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Apr 13 20:32:40.207803 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Apr 13 20:32:40.217813 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Apr 13 20:32:40.246819 systemd[1]: Stopped target sysinit.target - System Initialization.
Apr 13 20:32:40.267612 systemd[1]: Stopped target local-fs.target - Local File Systems.
Apr 13 20:32:40.287642 systemd[1]: Stopped target swap.target - Swaps.
Apr 13 20:32:40.306645 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Apr 13 20:32:40.306836 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Apr 13 20:32:40.332719 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:32:40.352688 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:32:40.373573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Apr 13 20:32:40.373793 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:32:40.396703 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Apr 13 20:32:40.396951 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Apr 13 20:32:40.428749 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Apr 13 20:32:40.429026 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Apr 13 20:32:40.448724 systemd[1]: ignition-files.service: Deactivated successfully.
Apr 13 20:32:40.448878 systemd[1]: Stopped ignition-files.service - Ignition (files).
Apr 13 20:32:40.474618 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Apr 13 20:32:40.487291 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Apr 13 20:32:40.558310 ignition[985]: INFO : Ignition 2.19.0
Apr 13 20:32:40.558310 ignition[985]: INFO : Stage: umount
Apr 13 20:32:40.558310 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Apr 13 20:32:40.558310 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Apr 13 20:32:40.558310 ignition[985]: INFO : umount: umount passed
Apr 13 20:32:40.558310 ignition[985]: INFO : Ignition finished successfully
Apr 13 20:32:40.487768 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:32:40.530151 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Apr 13 20:32:40.548414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Apr 13 20:32:40.548781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:32:40.566854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Apr 13 20:32:40.567118 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Apr 13 20:32:40.634355 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Apr 13 20:32:40.635695 systemd[1]: ignition-mount.service: Deactivated successfully.
Apr 13 20:32:40.635844 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Apr 13 20:32:40.652290 systemd[1]: sysroot-boot.service: Deactivated successfully.
Apr 13 20:32:40.652487 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Apr 13 20:32:40.674736 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Apr 13 20:32:40.674883 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Apr 13 20:32:40.692420 systemd[1]: ignition-disks.service: Deactivated successfully.
Apr 13 20:32:40.692573 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Apr 13 20:32:40.711680 systemd[1]: ignition-kargs.service: Deactivated successfully.
Apr 13 20:32:40.711767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Apr 13 20:32:40.731721 systemd[1]: ignition-fetch.service: Deactivated successfully.
Apr 13 20:32:40.731801 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Apr 13 20:32:40.751602 systemd[1]: Stopped target network.target - Network.
Apr 13 20:32:40.769503 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Apr 13 20:32:40.769604 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Apr 13 20:32:40.789699 systemd[1]: Stopped target paths.target - Path Units.
Apr 13 20:32:40.807269 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Apr 13 20:32:40.812182 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:32:40.828534 systemd[1]: Stopped target slices.target - Slice Units.
Apr 13 20:32:40.846495 systemd[1]: Stopped target sockets.target - Socket Units.
Apr 13 20:32:40.855630 systemd[1]: iscsid.socket: Deactivated successfully.
Apr 13 20:32:40.855698 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Apr 13 20:32:40.884664 systemd[1]: iscsiuio.socket: Deactivated successfully.
Apr 13 20:32:40.884763 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Apr 13 20:32:40.903573 systemd[1]: ignition-setup.service: Deactivated successfully.
Apr 13 20:32:40.903661 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Apr 13 20:32:40.922635 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Apr 13 20:32:40.922720 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Apr 13 20:32:40.941629 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Apr 13 20:32:40.941717 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Apr 13 20:32:40.960870 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Apr 13 20:32:40.966281 systemd-networkd[751]: eth0: DHCPv6 lease lost
Apr 13 20:32:40.979815 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Apr 13 20:32:40.999199 systemd[1]: systemd-resolved.service: Deactivated successfully.
Apr 13 20:32:40.999395 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Apr 13 20:32:41.020594 systemd[1]: systemd-networkd.service: Deactivated successfully.
Apr 13 20:32:41.020819 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Apr 13 20:32:41.039520 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Apr 13 20:32:41.039594 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:32:41.493281 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Apr 13 20:32:41.064235 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Apr 13 20:32:41.077203 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Apr 13 20:32:41.077350 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Apr 13 20:32:41.089531 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Apr 13 20:32:41.089612 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:32:41.109553 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Apr 13 20:32:41.109648 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:32:41.132543 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Apr 13 20:32:41.132636 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:32:41.151699 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:32:41.170982 systemd[1]: systemd-udevd.service: Deactivated successfully.
Apr 13 20:32:41.171267 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:32:41.199990 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Apr 13 20:32:41.200237 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:32:41.218388 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Apr 13 20:32:41.218511 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:32:41.228348 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Apr 13 20:32:41.228538 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Apr 13 20:32:41.256651 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Apr 13 20:32:41.256745 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Apr 13 20:32:41.285547 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Apr 13 20:32:41.285765 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Apr 13 20:32:41.320323 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Apr 13 20:32:41.334254 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Apr 13 20:32:41.334432 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:32:41.346343 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Apr 13 20:32:41.346462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:32:41.359169 systemd[1]: network-cleanup.service: Deactivated successfully.
Apr 13 20:32:41.359334 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Apr 13 20:32:41.378928 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Apr 13 20:32:41.379095 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Apr 13 20:32:41.397278 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Apr 13 20:32:41.426398 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Apr 13 20:32:41.450472 systemd[1]: Switching root.
Apr 13 20:32:41.836302 systemd-journald[184]: Journal stopped
Apr 13 20:32:44.355173 kernel: SELinux: policy capability network_peer_controls=1
Apr 13 20:32:44.355249 kernel: SELinux: policy capability open_perms=1
Apr 13 20:32:44.355272 kernel: SELinux: policy capability extended_socket_class=1
Apr 13 20:32:44.355292 kernel: SELinux: policy capability always_check_network=0
Apr 13 20:32:44.355309 kernel: SELinux: policy capability cgroup_seclabel=1
Apr 13 20:32:44.355326 kernel: SELinux: policy capability nnp_nosuid_transition=1
Apr 13 20:32:44.355348 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Apr 13 20:32:44.355371 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Apr 13 20:32:44.355390 kernel: audit: type=1403 audit(1776112362.053:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Apr 13 20:32:44.355412 systemd[1]: Successfully loaded SELinux policy in 93.699ms.
Apr 13 20:32:44.355435 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.943ms.
Apr 13 20:32:44.355458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Apr 13 20:32:44.355478 systemd[1]: Detected virtualization google.
Apr 13 20:32:44.355505 systemd[1]: Detected architecture x86-64.
Apr 13 20:32:44.355533 systemd[1]: Detected first boot.
Apr 13 20:32:44.355557 systemd[1]: Initializing machine ID from random generator.
Apr 13 20:32:44.355578 zram_generator::config[1027]: No configuration found.
Apr 13 20:32:44.355601 systemd[1]: Populated /etc with preset unit settings.
Apr 13 20:32:44.355622 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Apr 13 20:32:44.355648 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Apr 13 20:32:44.355670 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:32:44.355692 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Apr 13 20:32:44.355714 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Apr 13 20:32:44.355735 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Apr 13 20:32:44.355759 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Apr 13 20:32:44.355782 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Apr 13 20:32:44.355811 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Apr 13 20:32:44.355833 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Apr 13 20:32:44.355855 systemd[1]: Created slice user.slice - User and Session Slice.
Apr 13 20:32:44.355877 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Apr 13 20:32:44.355899 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Apr 13 20:32:44.355924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Apr 13 20:32:44.355946 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Apr 13 20:32:44.355977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Apr 13 20:32:44.356005 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Apr 13 20:32:44.356027 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Apr 13 20:32:44.356076 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Apr 13 20:32:44.356111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Apr 13 20:32:44.356144 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Apr 13 20:32:44.356178 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Apr 13 20:32:44.356221 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Apr 13 20:32:44.356258 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Apr 13 20:32:44.356293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Apr 13 20:32:44.356328 systemd[1]: Reached target slices.target - Slice Units.
Apr 13 20:32:44.356353 systemd[1]: Reached target swap.target - Swaps.
Apr 13 20:32:44.356383 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Apr 13 20:32:44.356410 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Apr 13 20:32:44.356439 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Apr 13 20:32:44.356465 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Apr 13 20:32:44.356493 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Apr 13 20:32:44.356534 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Apr 13 20:32:44.356568 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Apr 13 20:32:44.356602 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Apr 13 20:32:44.356636 systemd[1]: Mounting media.mount - External Media Directory...
Apr 13 20:32:44.356668 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:44.356712 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Apr 13 20:32:44.356743 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Apr 13 20:32:44.356777 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Apr 13 20:32:44.356811 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Apr 13 20:32:44.356846 systemd[1]: Reached target machines.target - Containers.
Apr 13 20:32:44.356880 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Apr 13 20:32:44.356914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:32:44.356948 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Apr 13 20:32:44.357002 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Apr 13 20:32:44.357033 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:32:44.357086 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:32:44.357130 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:32:44.357152 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Apr 13 20:32:44.357176 kernel: fuse: init (API version 7.39)
Apr 13 20:32:44.357199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:32:44.357221 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Apr 13 20:32:44.357250 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Apr 13 20:32:44.357275 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Apr 13 20:32:44.357300 kernel: ACPI: bus type drm_connector registered
Apr 13 20:32:44.357322 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Apr 13 20:32:44.357343 systemd[1]: Stopped systemd-fsck-usr.service.
Apr 13 20:32:44.357365 kernel: loop: module loaded
Apr 13 20:32:44.357386 systemd[1]: Starting systemd-journald.service - Journal Service...
Apr 13 20:32:44.357408 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Apr 13 20:32:44.357431 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Apr 13 20:32:44.357496 systemd-journald[1114]: Collecting audit messages is disabled.
Apr 13 20:32:44.357548 systemd-journald[1114]: Journal started
Apr 13 20:32:44.357609 systemd-journald[1114]: Runtime Journal (/run/log/journal/701983bf4df94a8d846a484e089a6a03) is 8.0M, max 148.7M, 140.7M free.
Apr 13 20:32:43.034029 systemd[1]: Queued start job for default target multi-user.target.
Apr 13 20:32:43.060295 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Apr 13 20:32:43.060944 systemd[1]: systemd-journald.service: Deactivated successfully.
Apr 13 20:32:44.376136 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Apr 13 20:32:44.403179 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Apr 13 20:32:44.412099 systemd[1]: verity-setup.service: Deactivated successfully.
Apr 13 20:32:44.412383 systemd[1]: Stopped verity-setup.service.
Apr 13 20:32:44.448112 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:44.458696 systemd[1]: Started systemd-journald.service - Journal Service.
Apr 13 20:32:44.469299 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Apr 13 20:32:44.479540 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Apr 13 20:32:44.490610 systemd[1]: Mounted media.mount - External Media Directory.
Apr 13 20:32:44.500560 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Apr 13 20:32:44.510542 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Apr 13 20:32:44.520580 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Apr 13 20:32:44.528685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Apr 13 20:32:44.540905 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Apr 13 20:32:44.552920 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Apr 13 20:32:44.553509 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Apr 13 20:32:44.565863 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:32:44.566437 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:32:44.578890 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:32:44.579452 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:32:44.589781 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:32:44.590284 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:32:44.601838 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Apr 13 20:32:44.602381 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Apr 13 20:32:44.612716 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:32:44.613036 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:32:44.623725 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Apr 13 20:32:44.634694 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Apr 13 20:32:44.646721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Apr 13 20:32:44.658703 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Apr 13 20:32:44.685778 systemd[1]: Reached target network-pre.target - Preparation for Network.
Apr 13 20:32:44.709203 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Apr 13 20:32:44.725210 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Apr 13 20:32:44.735287 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Apr 13 20:32:44.735547 systemd[1]: Reached target local-fs.target - Local File Systems.
Apr 13 20:32:44.746694 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Apr 13 20:32:44.764402 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Apr 13 20:32:44.791349 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Apr 13 20:32:44.802492 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:32:44.814375 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Apr 13 20:32:44.835358 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Apr 13 20:32:44.844786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:32:44.863349 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Apr 13 20:32:44.874705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:32:44.887518 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Apr 13 20:32:44.908322 systemd-journald[1114]: Time spent on flushing to /var/log/journal/701983bf4df94a8d846a484e089a6a03 is 122.874ms for 927 entries.
Apr 13 20:32:44.908322 systemd-journald[1114]: System Journal (/var/log/journal/701983bf4df94a8d846a484e089a6a03) is 8.0M, max 584.8M, 576.8M free.
Apr 13 20:32:45.115566 systemd-journald[1114]: Received client request to flush runtime journal.
Apr 13 20:32:45.115676 kernel: loop0: detected capacity change from 0 to 217752
Apr 13 20:32:44.918777 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Apr 13 20:32:44.938317 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Apr 13 20:32:44.956392 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Apr 13 20:32:44.974808 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Apr 13 20:32:44.987452 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Apr 13 20:32:44.999618 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Apr 13 20:32:45.011860 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Apr 13 20:32:45.041986 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Apr 13 20:32:45.074176 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Apr 13 20:32:45.100139 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Apr 13 20:32:45.125317 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Apr 13 20:32:45.143779 udevadm[1147]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Apr 13 20:32:45.166449 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Apr 13 20:32:45.172155 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Apr 13 20:32:45.189092 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Apr 13 20:32:45.196525 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Apr 13 20:32:45.218278 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Apr 13 20:32:45.234667 kernel: loop1: detected capacity change from 0 to 54824
Apr 13 20:32:45.348345 kernel: loop2: detected capacity change from 0 to 142488
Apr 13 20:32:45.350452 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Apr 13 20:32:45.350516 systemd-tmpfiles[1163]: ACLs are not supported, ignoring.
Apr 13 20:32:45.371890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Apr 13 20:32:45.468389 kernel: loop3: detected capacity change from 0 to 140768
Apr 13 20:32:45.577090 kernel: loop4: detected capacity change from 0 to 217752
Apr 13 20:32:45.627107 kernel: loop5: detected capacity change from 0 to 54824
Apr 13 20:32:45.681334 kernel: loop6: detected capacity change from 0 to 142488
Apr 13 20:32:45.751103 kernel: loop7: detected capacity change from 0 to 140768
Apr 13 20:32:45.807123 (sd-merge)[1169]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Apr 13 20:32:45.808898 (sd-merge)[1169]: Merged extensions into '/usr'.
Apr 13 20:32:45.819039 systemd[1]: Reloading requested from client PID 1145 ('systemd-sysext') (unit systemd-sysext.service)...
Apr 13 20:32:45.819087 systemd[1]: Reloading...
Apr 13 20:32:46.060105 zram_generator::config[1196]: No configuration found.
Apr 13 20:32:46.084178 ldconfig[1140]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Apr 13 20:32:46.346785 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:32:46.467217 systemd[1]: Reloading finished in 646 ms.
Apr 13 20:32:46.505898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Apr 13 20:32:46.518122 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Apr 13 20:32:46.543378 systemd[1]: Starting ensure-sysext.service...
Apr 13 20:32:46.560269 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Apr 13 20:32:46.575868 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Apr 13 20:32:46.575895 systemd[1]: Reloading...
Apr 13 20:32:46.645710 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Apr 13 20:32:46.646424 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Apr 13 20:32:46.652721 systemd-tmpfiles[1237]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Apr 13 20:32:46.654657 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Apr 13 20:32:46.654951 systemd-tmpfiles[1237]: ACLs are not supported, ignoring.
Apr 13 20:32:46.669413 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:32:46.669660 systemd-tmpfiles[1237]: Skipping /boot
Apr 13 20:32:46.701845 systemd-tmpfiles[1237]: Detected autofs mount point /boot during canonicalization of boot.
Apr 13 20:32:46.702284 systemd-tmpfiles[1237]: Skipping /boot
Apr 13 20:32:46.759125 zram_generator::config[1266]: No configuration found.
Apr 13 20:32:46.902365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:32:46.973103 systemd[1]: Reloading finished in 396 ms.
Apr 13 20:32:46.998223 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Apr 13 20:32:47.014790 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Apr 13 20:32:47.042349 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:32:47.069431 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Apr 13 20:32:47.090344 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Apr 13 20:32:47.110241 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Apr 13 20:32:47.127318 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Apr 13 20:32:47.144319 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Apr 13 20:32:47.174696 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Apr 13 20:32:47.191467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:47.192024 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:32:47.201395 augenrules[1326]: No rules
Apr 13 20:32:47.200492 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:32:47.218233 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:32:47.239611 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:32:47.249353 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:32:47.250266 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:47.254479 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:32:47.266393 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Apr 13 20:32:47.279215 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:32:47.280168 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:32:47.287258 systemd-udevd[1322]: Using default interface naming scheme 'v255'.
Apr 13 20:32:47.314326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:32:47.314908 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:32:47.327956 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Apr 13 20:32:47.341327 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Apr 13 20:32:47.352787 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:32:47.357274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:32:47.369488 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Apr 13 20:32:47.388888 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Apr 13 20:32:47.423727 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:47.427283 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Apr 13 20:32:47.437285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Apr 13 20:32:47.457265 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Apr 13 20:32:47.477424 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Apr 13 20:32:47.500332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Apr 13 20:32:47.520355 systemd[1]: Starting setup-oem.service - Setup OEM...
Apr 13 20:32:47.529380 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Apr 13 20:32:47.543382 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Apr 13 20:32:47.553282 systemd[1]: Reached target time-set.target - System Time Set.
Apr 13 20:32:47.573416 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Apr 13 20:32:47.583217 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Apr 13 20:32:47.583287 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Apr 13 20:32:47.586062 systemd[1]: Finished ensure-sysext.service.
Apr 13 20:32:47.595812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Apr 13 20:32:47.597420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Apr 13 20:32:47.609897 systemd[1]: modprobe@drm.service: Deactivated successfully.
Apr 13 20:32:47.610427 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Apr 13 20:32:47.620999 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Apr 13 20:32:47.621372 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Apr 13 20:32:47.634011 systemd[1]: modprobe@loop.service: Deactivated successfully.
Apr 13 20:32:47.634577 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Apr 13 20:32:47.640841 systemd-resolved[1319]: Positive Trust Anchors:
Apr 13 20:32:47.640868 systemd-resolved[1319]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Apr 13 20:32:47.640940 systemd-resolved[1319]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Apr 13 20:32:47.672539 systemd-resolved[1319]: Defaulting to hostname 'linux'.
Apr 13 20:32:47.678188 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Apr 13 20:32:47.688713 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Apr 13 20:32:47.701109 systemd[1]: Finished setup-oem.service - Setup OEM.
Apr 13 20:32:47.728149 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Apr 13 20:32:47.728251 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Apr 13 20:32:47.751323 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Apr 13 20:32:47.762221 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Apr 13 20:32:47.762344 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Apr 13 20:32:47.845104 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Apr 13 20:32:47.851719 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Apr 13 20:32:47.853282 systemd-networkd[1373]: lo: Link UP
Apr 13 20:32:47.853314 systemd-networkd[1373]: lo: Gained carrier
Apr 13 20:32:47.861076 kernel: ACPI: button: Power Button [PWRF]
Apr 13 20:32:47.861292 systemd-networkd[1373]: Enumeration completed
Apr 13 20:32:47.864670 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:32:47.864874 systemd-networkd[1373]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Apr 13 20:32:47.865992 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:32:47.866234 systemd-networkd[1373]: eth0: Link UP
Apr 13 20:32:47.866348 systemd-networkd[1373]: eth0: Gained carrier
Apr 13 20:32:47.866530 systemd-networkd[1373]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Apr 13 20:32:47.868490 systemd[1]: Started systemd-networkd.service - Network Configuration.
Apr 13 20:32:47.881106 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Apr 13 20:32:47.883158 systemd-networkd[1373]: eth0: DHCPv4 address 10.128.0.70/32, gateway 10.128.0.1 acquired from 169.254.169.254
Apr 13 20:32:47.889078 kernel: ACPI: button: Sleep Button [SLPF]
Apr 13 20:32:47.891846 systemd[1]: Reached target network.target - Network.
Apr 13 20:32:47.912117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Apr 13 20:32:47.922103 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Apr 13 20:32:47.988085 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1366)
Apr 13 20:32:48.008101 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Apr 13 20:32:48.101090 kernel: EDAC MC: Ver: 3.0.0
Apr 13 20:32:48.153147 kernel: mousedev: PS/2 mouse device common for all mice
Apr 13 20:32:48.153512 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Apr 13 20:32:48.192276 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Apr 13 20:32:48.203692 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Apr 13 20:32:48.213469 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Apr 13 20:32:48.216170 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Apr 13 20:32:48.244512 lvm[1412]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:32:48.265632 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Apr 13 20:32:48.286846 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Apr 13 20:32:48.288703 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Apr 13 20:32:48.294713 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Apr 13 20:32:48.315131 lvm[1417]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Apr 13 20:32:48.335878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Apr 13 20:32:48.347597 systemd[1]: Reached target sysinit.target - System Initialization.
Apr 13 20:32:48.357396 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Apr 13 20:32:48.369330 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Apr 13 20:32:48.381481 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Apr 13 20:32:48.391508 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Apr 13 20:32:48.403256 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Apr 13 20:32:48.415297 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Apr 13 20:32:48.415382 systemd[1]: Reached target paths.target - Path Units.
Apr 13 20:32:48.424271 systemd[1]: Reached target timers.target - Timer Units.
Apr 13 20:32:48.434192 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Apr 13 20:32:48.446420 systemd[1]: Starting docker.socket - Docker Socket for the API...
Apr 13 20:32:48.470230 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Apr 13 20:32:48.481414 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Apr 13 20:32:48.493684 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Apr 13 20:32:48.504313 systemd[1]: Reached target sockets.target - Socket Units.
Apr 13 20:32:48.514260 systemd[1]: Reached target basic.target - Basic System.
Apr 13 20:32:48.523330 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:32:48.523430 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Apr 13 20:32:48.529233 systemd[1]: Starting containerd.service - containerd container runtime...
Apr 13 20:32:48.553407 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Apr 13 20:32:48.572387 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Apr 13 20:32:48.602351 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Apr 13 20:32:48.620303 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Apr 13 20:32:48.630229 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Apr 13 20:32:48.637429 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Apr 13 20:32:48.650827 jq[1427]: false
Apr 13 20:32:48.657728 systemd[1]: Started ntpd.service - Network Time Service.
Apr 13 20:32:48.676683 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Apr 13 20:32:48.694404 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Apr 13 20:32:48.701959 coreos-metadata[1425]: Apr 13 20:32:48.698 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Apr 13 20:32:48.701959 coreos-metadata[1425]: Apr 13 20:32:48.699 INFO Fetch successful
Apr 13 20:32:48.701959 coreos-metadata[1425]: Apr 13 20:32:48.699 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Apr 13 20:32:48.701959 coreos-metadata[1425]: Apr 13 20:32:48.701 INFO Fetch successful
Apr 13 20:32:48.701959 coreos-metadata[1425]: Apr 13 20:32:48.701 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Apr 13 20:32:48.706106 coreos-metadata[1425]: Apr 13 20:32:48.702 INFO Fetch successful
Apr 13 20:32:48.706106 coreos-metadata[1425]: Apr 13 20:32:48.703 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Apr 13 20:32:48.706106 coreos-metadata[1425]: Apr 13 20:32:48.703 INFO Fetch successful
Apr 13 20:32:48.714303 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found loop4
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found loop5
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found loop6
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found loop7
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda1
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda2
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda3
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found usr
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda4
Apr 13 20:32:48.737157 extend-filesystems[1429]: Found sda6
Apr 13 20:32:48.887292 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: ----------------------------------------------------
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: corporation. Support and training for ntp-4 are
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: available at https://www.nwtime.org/support
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: ----------------------------------------------------
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: proto: precision = 0.095 usec (-23)
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: basedate set to 2026-04-01
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listen normally on 3 eth0 10.128.0.70:123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listen normally on 4 lo [::1]:123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: bind(21) AF_INET6 fe80::4001:aff:fe80:46%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:46%2#123
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: failed to init interface for address fe80::4001:aff:fe80:46%2
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: Listening on routing socket on fd #21 for interface updates
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:32:48.887519 ntpd[1432]: 13 Apr 20:32:48 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:32:48.741364 systemd[1]: Starting systemd-logind.service - User Login Management...
Apr 13 20:32:48.893735 extend-filesystems[1429]: Found sda7
Apr 13 20:32:48.893735 extend-filesystems[1429]: Found sda9
Apr 13 20:32:48.893735 extend-filesystems[1429]: Checking size of /dev/sda9
Apr 13 20:32:48.893735 extend-filesystems[1429]: Resized partition /dev/sda9
Apr 13 20:32:48.949254 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Apr 13 20:32:48.771818 ntpd[1432]: ntpd 4.2.8p17@1.4004-o Mon Apr 13 18:02:33 UTC 2026 (1): Starting
Apr 13 20:32:48.965121 update_engine[1447]: I20260413 20:32:48.861464 1447 main.cc:92] Flatcar Update Engine starting
Apr 13 20:32:48.965121 update_engine[1447]: I20260413 20:32:48.871399 1447 update_check_scheduler.cc:74] Next update check in 4m53s
Apr 13 20:32:48.751868 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Apr 13 20:32:48.965832 extend-filesystems[1451]: resize2fs 1.47.1 (20-May-2024)
Apr 13 20:32:48.771887 ntpd[1432]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Apr 13 20:32:48.753595 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Apr 13 20:32:48.975650 extend-filesystems[1451]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Apr 13 20:32:48.975650 extend-filesystems[1451]: old_desc_blocks = 1, new_desc_blocks = 2
Apr 13 20:32:48.975650 extend-filesystems[1451]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Apr 13 20:32:48.771907 ntpd[1432]: ----------------------------------------------------
Apr 13 20:32:48.759691 systemd[1]: Starting update-engine.service - Update Engine...
Apr 13 20:32:49.019022 extend-filesystems[1429]: Resized filesystem in /dev/sda9
Apr 13 20:32:49.065071 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (1366)
Apr 13 20:32:48.771924 ntpd[1432]: ntp-4 is maintained by Network Time Foundation,
Apr 13 20:32:49.065266 jq[1450]: true
Apr 13 20:32:48.787106 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Apr 13 20:32:48.771943 ntpd[1432]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Apr 13 20:32:48.800581 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Apr 13 20:32:48.771963 ntpd[1432]: corporation. Support and training for ntp-4 are
Apr 13 20:32:48.875900 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Apr 13 20:32:48.771980 ntpd[1432]: available at https://www.nwtime.org/support
Apr 13 20:32:48.877155 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Apr 13 20:32:48.772036 ntpd[1432]: ----------------------------------------------------
Apr 13 20:32:49.069823 jq[1461]: true
Apr 13 20:32:48.877715 systemd[1]: motdgen.service: Deactivated successfully.
Apr 13 20:32:48.777950 dbus-daemon[1426]: [system] SELinux support is enabled
Apr 13 20:32:48.878795 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Apr 13 20:32:48.778678 ntpd[1432]: proto: precision = 0.095 usec (-23)
Apr 13 20:32:48.897691 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Apr 13 20:32:48.782948 ntpd[1432]: basedate set to 2026-04-01
Apr 13 20:32:48.898016 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Apr 13 20:32:48.782977 ntpd[1432]: gps base set to 2026-04-05 (week 2413)
Apr 13 20:32:48.984901 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Apr 13 20:32:48.794982 dbus-daemon[1426]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1373 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Apr 13 20:32:49.007407 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Apr 13 20:32:48.797075 ntpd[1432]: Listen and drop on 0 v6wildcard [::]:123
Apr 13 20:32:49.007465 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Apr 13 20:32:48.797145 ntpd[1432]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Apr 13 20:32:49.029308 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Apr 13 20:32:48.802697 ntpd[1432]: Listen normally on 2 lo 127.0.0.1:123
Apr 13 20:32:49.029348 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Apr 13 20:32:48.802790 ntpd[1432]: Listen normally on 3 eth0 10.128.0.70:123
Apr 13 20:32:49.051083 systemd[1]: extend-filesystems.service: Deactivated successfully.
Apr 13 20:32:48.802866 ntpd[1432]: Listen normally on 4 lo [::1]:123
Apr 13 20:32:49.052166 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Apr 13 20:32:48.802944 ntpd[1432]: bind(21) AF_INET6 fe80::4001:aff:fe80:46%2#123 flags 0x11 failed: Cannot assign requested address
Apr 13 20:32:49.061162 systemd-networkd[1373]: eth0: Gained IPv6LL
Apr 13 20:32:48.802979 ntpd[1432]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:46%2#123
Apr 13 20:32:49.073937 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Apr 13 20:32:48.803005 ntpd[1432]: failed to init interface for address fe80::4001:aff:fe80:46%2
Apr 13 20:32:49.080521 (ntainerd)[1462]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Apr 13 20:32:48.804267 ntpd[1432]: Listening on routing socket on fd #21 for interface updates
Apr 13 20:32:48.819060 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:32:48.820222 ntpd[1432]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Apr 13 20:32:48.990710 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.systemd1'
Apr 13 20:32:49.114481 systemd[1]: Started update-engine.service - Update Engine.
Apr 13 20:32:49.130292 systemd[1]: Reached target network-online.target - Network is Online.
Apr 13 20:32:49.144404 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:32:49.161386 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Apr 13 20:32:49.181360 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Apr 13 20:32:49.202359 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Apr 13 20:32:49.220414 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Apr 13 20:32:49.234847 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Apr 13 20:32:49.246795 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Apr 13 20:32:49.256382 tar[1460]: linux-amd64/LICENSE
Apr 13 20:32:49.256382 tar[1460]: linux-amd64/helm
Apr 13 20:32:49.327603 init.sh[1483]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Apr 13 20:32:49.327603 init.sh[1483]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Apr 13 20:32:49.327603 init.sh[1483]: + /usr/bin/google_instance_setup
Apr 13 20:32:49.428559 systemd-logind[1443]: Watching system buttons on /dev/input/event1 (Power Button)
Apr 13 20:32:49.438180 systemd-logind[1443]: Watching system buttons on /dev/input/event2 (Sleep Button)
Apr 13 20:32:49.441271 bash[1504]: Updated "/home/core/.ssh/authorized_keys"
Apr 13 20:32:49.438230 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Apr 13 20:32:49.444903 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Apr 13 20:32:49.447134 systemd-logind[1443]: New seat seat0.
Apr 13 20:32:49.458571 systemd[1]: Started systemd-logind.service - User Login Management.
Apr 13 20:32:49.471134 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Apr 13 20:32:49.507012 systemd[1]: Starting sshkeys.service...
Apr 13 20:32:49.600944 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.hostname1'
Apr 13 20:32:49.601542 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Apr 13 20:32:49.607242 dbus-daemon[1426]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1487 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Apr 13 20:32:49.632780 systemd[1]: Starting polkit.service - Authorization Manager...
Apr 13 20:32:49.653908 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Apr 13 20:32:49.679976 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Apr 13 20:32:49.847249 polkitd[1512]: Started polkitd version 121 Apr 13 20:32:49.908081 coreos-metadata[1513]: Apr 13 20:32:49.907 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Apr 13 20:32:49.913082 coreos-metadata[1513]: Apr 13 20:32:49.909 INFO Fetch failed with 404: resource not found Apr 13 20:32:49.913082 coreos-metadata[1513]: Apr 13 20:32:49.912 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Apr 13 20:32:49.913082 coreos-metadata[1513]: Apr 13 20:32:49.912 INFO Fetch successful Apr 13 20:32:49.913082 coreos-metadata[1513]: Apr 13 20:32:49.912 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Apr 13 20:32:49.917934 coreos-metadata[1513]: Apr 13 20:32:49.914 INFO Fetch failed with 404: resource not found Apr 13 20:32:49.917934 coreos-metadata[1513]: Apr 13 20:32:49.914 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Apr 13 20:32:49.917934 coreos-metadata[1513]: Apr 13 20:32:49.916 INFO Fetch failed with 404: resource not found Apr 13 20:32:49.917934 coreos-metadata[1513]: Apr 13 20:32:49.916 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Apr 13 20:32:49.917934 coreos-metadata[1513]: Apr 13 20:32:49.917 INFO Fetch successful Apr 13 20:32:49.918244 polkitd[1512]: Loading rules from directory /etc/polkit-1/rules.d Apr 13 20:32:49.918354 polkitd[1512]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 13 20:32:49.925466 polkitd[1512]: Finished loading, compiling and executing 2 rules Apr 13 20:32:49.926372 unknown[1513]: wrote ssh authorized keys file for user: core Apr 13 20:32:49.937876 dbus-daemon[1426]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 13 20:32:49.938207 systemd[1]: Started polkit.service - Authorization Manager. 
Apr 13 20:32:49.940988 polkitd[1512]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 13 20:32:49.990319 locksmithd[1488]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 13 20:32:50.044128 update-ssh-keys[1531]: Updated "/home/core/.ssh/authorized_keys" Apr 13 20:32:50.048901 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 13 20:32:50.058735 systemd-resolved[1319]: System hostname changed to 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal'. Apr 13 20:32:50.058983 systemd-hostnamed[1487]: Hostname set to (transient) Apr 13 20:32:50.065449 systemd[1]: Finished sshkeys.service. Apr 13 20:32:50.374431 containerd[1462]: time="2026-04-13T20:32:50.374247911Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Apr 13 20:32:50.565005 containerd[1462]: time="2026-04-13T20:32:50.564901832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575340031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.127-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575417839Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575447604Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575726243Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575759560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575847789Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:32:50.576090 containerd[1462]: time="2026-04-13T20:32:50.575867666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.577397 containerd[1462]: time="2026-04-13T20:32:50.577345948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:32:50.577397 containerd[1462]: time="2026-04-13T20:32:50.577396513Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.577560 containerd[1462]: time="2026-04-13T20:32:50.577422018Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:32:50.577560 containerd[1462]: time="2026-04-13T20:32:50.577440425Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.577710 containerd[1462]: time="2026-04-13T20:32:50.577617074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 13 20:32:50.580076 containerd[1462]: time="2026-04-13T20:32:50.577971651Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Apr 13 20:32:50.581070 containerd[1462]: time="2026-04-13T20:32:50.581006054Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 13 20:32:50.581170 containerd[1462]: time="2026-04-13T20:32:50.581080344Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 13 20:32:50.581263 containerd[1462]: time="2026-04-13T20:32:50.581233996Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 13 20:32:50.581353 containerd[1462]: time="2026-04-13T20:32:50.581328157Z" level=info msg="metadata content store policy set" policy=shared Apr 13 20:32:50.591519 containerd[1462]: time="2026-04-13T20:32:50.591460308Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 13 20:32:50.592552 containerd[1462]: time="2026-04-13T20:32:50.592458069Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 13 20:32:50.592690 containerd[1462]: time="2026-04-13T20:32:50.592565707Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 13 20:32:50.592690 containerd[1462]: time="2026-04-13T20:32:50.592596555Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 13 20:32:50.592690 containerd[1462]: time="2026-04-13T20:32:50.592633881Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.592860815Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.593984380Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594186761Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594214631Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594237018Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594261468Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594283912Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594305940Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594330253Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594354126Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594392271Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594413064Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594432102Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 13 20:32:50.595571 containerd[1462]: time="2026-04-13T20:32:50.594463418Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594484989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594504284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594525899Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594549297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594580840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594601653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594623950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594663460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." 
type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594691350Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594711635Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594741394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594764434Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594793595Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594826985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.596468 containerd[1462]: time="2026-04-13T20:32:50.594847136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.594866323Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597117712Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597724194Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597753880Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597779263Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597802427Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597831319Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597851796Z" level=info msg="NRI interface is disabled by configuration." Apr 13 20:32:50.598243 containerd[1462]: time="2026-04-13T20:32:50.597875800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Apr 13 20:32:50.598677 containerd[1462]: time="2026-04-13T20:32:50.598477562Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 13 20:32:50.598677 containerd[1462]: time="2026-04-13T20:32:50.598581713Z" level=info msg="Connect containerd service" Apr 13 20:32:50.598677 containerd[1462]: time="2026-04-13T20:32:50.598645446Z" level=info msg="using legacy CRI server" Apr 13 20:32:50.598677 containerd[1462]: time="2026-04-13T20:32:50.598659758Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 13 20:32:50.599503 containerd[1462]: time="2026-04-13T20:32:50.598851455Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.603597430Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605182530Z" level=info msg="Start subscribing containerd event" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605259831Z" level=info msg="Start recovering state" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605353895Z" level=info msg="Start event monitor" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605393335Z" level=info msg="Start 
snapshots syncer" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605408786Z" level=info msg="Start cni network conf syncer for default" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.605421435Z" level=info msg="Start streaming server" Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.607393513Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 13 20:32:50.608771 containerd[1462]: time="2026-04-13T20:32:50.607476222Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 13 20:32:50.619330 systemd[1]: Started containerd.service - containerd container runtime. Apr 13 20:32:50.622682 containerd[1462]: time="2026-04-13T20:32:50.619798791Z" level=info msg="containerd successfully booted in 0.253037s" Apr 13 20:32:51.157272 instance-setup[1502]: INFO Running google_set_multiqueue. Apr 13 20:32:51.181301 sshd_keygen[1456]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 13 20:32:51.217313 instance-setup[1502]: INFO Set channels for eth0 to 2. Apr 13 20:32:51.245088 instance-setup[1502]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Apr 13 20:32:51.254630 instance-setup[1502]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Apr 13 20:32:51.257832 instance-setup[1502]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Apr 13 20:32:51.264291 instance-setup[1502]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Apr 13 20:32:51.264424 instance-setup[1502]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Apr 13 20:32:51.270939 instance-setup[1502]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Apr 13 20:32:51.271846 instance-setup[1502]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Apr 13 20:32:51.274036 instance-setup[1502]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Apr 13 20:32:51.276161 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 13 20:32:51.294608 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 13 20:32:51.301733 instance-setup[1502]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:32:51.310564 systemd[1]: Started sshd@0-10.128.0.70:22-20.229.252.112:36196.service - OpenSSH per-connection server daemon (20.229.252.112:36196). Apr 13 20:32:51.329756 instance-setup[1502]: INFO /usr/bin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Apr 13 20:32:51.340233 instance-setup[1502]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Apr 13 20:32:51.340338 instance-setup[1502]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Apr 13 20:32:51.370472 systemd[1]: issuegen.service: Deactivated successfully. Apr 13 20:32:51.370810 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 13 20:32:51.390552 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Apr 13 20:32:51.422894 tar[1460]: linux-amd64/README.md Apr 13 20:32:51.442085 init.sh[1483]: + /usr/bin/google_metadata_script_runner --script-type startup Apr 13 20:32:51.472069 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 13 20:32:51.486084 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Apr 13 20:32:51.510312 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 13 20:32:51.527327 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 13 20:32:51.537251 systemd[1]: Reached target getty.target - Login Prompts. Apr 13 20:32:51.694408 startup-script[1587]: INFO Starting startup scripts. Apr 13 20:32:51.702784 startup-script[1587]: INFO No startup scripts found in metadata. 
Apr 13 20:32:51.702841 startup-script[1587]: INFO Finished running startup scripts. Apr 13 20:32:51.735765 init.sh[1483]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Apr 13 20:32:51.735926 init.sh[1483]: + daemon_pids=() Apr 13 20:32:51.736008 init.sh[1483]: + for d in accounts clock_skew network Apr 13 20:32:51.736461 init.sh[1483]: + daemon_pids+=($!) Apr 13 20:32:51.736629 init.sh[1594]: + /usr/bin/google_accounts_daemon Apr 13 20:32:51.738204 init.sh[1483]: + for d in accounts clock_skew network Apr 13 20:32:51.738204 init.sh[1483]: + daemon_pids+=($!) Apr 13 20:32:51.738204 init.sh[1483]: + for d in accounts clock_skew network Apr 13 20:32:51.738204 init.sh[1483]: + daemon_pids+=($!) Apr 13 20:32:51.738204 init.sh[1483]: + NOTIFY_SOCKET=/run/systemd/notify Apr 13 20:32:51.738204 init.sh[1483]: + /usr/bin/systemd-notify --ready Apr 13 20:32:51.738845 init.sh[1595]: + /usr/bin/google_clock_skew_daemon Apr 13 20:32:51.741323 init.sh[1596]: + /usr/bin/google_network_daemon Apr 13 20:32:51.774155 systemd[1]: Started oem-gce.service - GCE Linux Agent. Apr 13 20:32:51.776208 ntpd[1432]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:46%2]:123 Apr 13 20:32:51.776852 ntpd[1432]: 13 Apr 20:32:51 ntpd[1432]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:46%2]:123 Apr 13 20:32:51.789198 init.sh[1483]: + wait -n 1594 1595 1596 Apr 13 20:32:52.100262 sshd[1576]: Accepted publickey for core from 20.229.252.112 port 36196 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:52.103446 sshd[1576]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:52.134241 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 13 20:32:52.154714 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 13 20:32:52.171722 systemd-logind[1443]: New session 1 of user core. 
Apr 13 20:32:52.213869 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 13 20:32:52.240461 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 13 20:32:52.279086 (systemd)[1606]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 13 20:32:52.355436 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:32:52.368551 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 13 20:32:52.376667 google-clock-skew[1595]: INFO Starting Google Clock Skew daemon. Apr 13 20:32:52.389889 (kubelet)[1618]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 13 20:32:52.392965 groupadd[1609]: group added to /etc/group: name=google-sudoers, GID=1000 Apr 13 20:32:52.403543 groupadd[1609]: group added to /etc/gshadow: name=google-sudoers Apr 13 20:32:52.405359 google-clock-skew[1595]: INFO Clock drift token has changed: 0. Apr 13 20:32:52.430757 google-networking[1596]: INFO Starting Google Networking daemon. Apr 13 20:32:53.000814 google-clock-skew[1595]: INFO Synced system time with hardware clock. Apr 13 20:32:53.001800 systemd-resolved[1319]: Clock change detected. Flushing caches. Apr 13 20:32:53.023498 groupadd[1609]: new group: name=google-sudoers, GID=1000 Apr 13 20:32:53.083399 google-accounts[1594]: INFO Starting Google Accounts daemon. Apr 13 20:32:53.112466 google-accounts[1594]: WARNING OS Login not installed. Apr 13 20:32:53.115442 google-accounts[1594]: INFO Creating a new user account for 0. Apr 13 20:32:53.127460 init.sh[1633]: useradd: invalid user name '0': use --badname to ignore Apr 13 20:32:53.126710 systemd[1606]: Queued start job for default target default.target. Apr 13 20:32:53.127753 google-accounts[1594]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. 
Apr 13 20:32:53.136048 systemd[1606]: Created slice app.slice - User Application Slice. Apr 13 20:32:53.136115 systemd[1606]: Reached target paths.target - Paths. Apr 13 20:32:53.136151 systemd[1606]: Reached target timers.target - Timers. Apr 13 20:32:53.152110 systemd[1606]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 13 20:32:53.164316 systemd[1606]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 13 20:32:53.165592 systemd[1606]: Reached target sockets.target - Sockets. Apr 13 20:32:53.165660 systemd[1606]: Reached target basic.target - Basic System. Apr 13 20:32:53.165757 systemd[1606]: Reached target default.target - Main User Target. Apr 13 20:32:53.165824 systemd[1606]: Startup finished in 371ms. Apr 13 20:32:53.166483 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 13 20:32:53.188380 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 13 20:32:53.200211 systemd[1]: Startup finished in 1.295s (kernel) + 25.212s (initrd) + 10.734s (userspace) = 37.242s. Apr 13 20:32:53.712339 systemd[1]: Started sshd@1-10.128.0.70:22-20.229.252.112:36208.service - OpenSSH per-connection server daemon (20.229.252.112:36208). Apr 13 20:32:53.797469 kubelet[1618]: E0413 20:32:53.797339 1618 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 13 20:32:53.801305 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 13 20:32:53.801617 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 13 20:32:53.802366 systemd[1]: kubelet.service: Consumed 1.314s CPU time. 
Apr 13 20:32:54.432968 sshd[1644]: Accepted publickey for core from 20.229.252.112 port 36208 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:54.435156 sshd[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:54.443512 systemd-logind[1443]: New session 2 of user core. Apr 13 20:32:54.457283 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 13 20:32:54.934335 sshd[1644]: pam_unix(sshd:session): session closed for user core Apr 13 20:32:54.940837 systemd[1]: sshd@1-10.128.0.70:22-20.229.252.112:36208.service: Deactivated successfully. Apr 13 20:32:54.944139 systemd[1]: session-2.scope: Deactivated successfully. Apr 13 20:32:54.945404 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit. Apr 13 20:32:54.947294 systemd-logind[1443]: Removed session 2. Apr 13 20:32:55.070490 systemd[1]: Started sshd@2-10.128.0.70:22-20.229.252.112:47620.service - OpenSSH per-connection server daemon (20.229.252.112:47620). Apr 13 20:32:55.789262 sshd[1652]: Accepted publickey for core from 20.229.252.112 port 47620 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:55.791381 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:55.798917 systemd-logind[1443]: New session 3 of user core. Apr 13 20:32:55.808301 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 13 20:32:56.281312 sshd[1652]: pam_unix(sshd:session): session closed for user core Apr 13 20:32:56.288221 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit. Apr 13 20:32:56.289549 systemd[1]: sshd@2-10.128.0.70:22-20.229.252.112:47620.service: Deactivated successfully. Apr 13 20:32:56.292276 systemd[1]: session-3.scope: Deactivated successfully. Apr 13 20:32:56.293932 systemd-logind[1443]: Removed session 3. 
Apr 13 20:32:56.406429 systemd[1]: Started sshd@3-10.128.0.70:22-20.229.252.112:47634.service - OpenSSH per-connection server daemon (20.229.252.112:47634). Apr 13 20:32:57.102689 sshd[1659]: Accepted publickey for core from 20.229.252.112 port 47634 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:57.105068 sshd[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:57.112022 systemd-logind[1443]: New session 4 of user core. Apr 13 20:32:57.116218 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 13 20:32:57.587565 sshd[1659]: pam_unix(sshd:session): session closed for user core Apr 13 20:32:57.593045 systemd[1]: sshd@3-10.128.0.70:22-20.229.252.112:47634.service: Deactivated successfully. Apr 13 20:32:57.595790 systemd[1]: session-4.scope: Deactivated successfully. Apr 13 20:32:57.598263 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit. Apr 13 20:32:57.599749 systemd-logind[1443]: Removed session 4. Apr 13 20:32:57.706640 systemd[1]: Started sshd@4-10.128.0.70:22-20.229.252.112:47648.service - OpenSSH per-connection server daemon (20.229.252.112:47648). Apr 13 20:32:58.402194 sshd[1666]: Accepted publickey for core from 20.229.252.112 port 47648 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:58.404560 sshd[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:58.412724 systemd-logind[1443]: New session 5 of user core. Apr 13 20:32:58.415232 systemd[1]: Started session-5.scope - Session 5 of User core. 
Apr 13 20:32:58.806761 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 13 20:32:58.807481 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 13 20:32:58.827730 sudo[1669]: pam_unix(sudo:session): session closed for user root Apr 13 20:32:58.938135 sshd[1666]: pam_unix(sshd:session): session closed for user core Apr 13 20:32:58.943699 systemd[1]: sshd@4-10.128.0.70:22-20.229.252.112:47648.service: Deactivated successfully. Apr 13 20:32:58.946813 systemd[1]: session-5.scope: Deactivated successfully. Apr 13 20:32:58.949598 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit. Apr 13 20:32:58.951351 systemd-logind[1443]: Removed session 5. Apr 13 20:32:59.062390 systemd[1]: Started sshd@5-10.128.0.70:22-20.229.252.112:47660.service - OpenSSH per-connection server daemon (20.229.252.112:47660). Apr 13 20:32:59.753330 sshd[1674]: Accepted publickey for core from 20.229.252.112 port 47660 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:32:59.755451 sshd[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:32:59.762783 systemd-logind[1443]: New session 6 of user core. Apr 13 20:32:59.772261 systemd[1]: Started session-6.scope - Session 6 of User core. 
Apr 13 20:33:00.136120 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Apr 13 20:33:00.136848 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:33:00.143823 sudo[1678]: pam_unix(sudo:session): session closed for user root
Apr 13 20:33:00.160821 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Apr 13 20:33:00.161490 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:33:00.183389 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Apr 13 20:33:00.188650 auditctl[1681]: No rules
Apr 13 20:33:00.189369 systemd[1]: audit-rules.service: Deactivated successfully.
Apr 13 20:33:00.189691 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Apr 13 20:33:00.194079 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Apr 13 20:33:00.256005 augenrules[1699]: No rules
Apr 13 20:33:00.258135 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Apr 13 20:33:00.260420 sudo[1677]: pam_unix(sudo:session): session closed for user root
Apr 13 20:33:00.370861 sshd[1674]: pam_unix(sshd:session): session closed for user core
Apr 13 20:33:00.376329 systemd[1]: sshd@5-10.128.0.70:22-20.229.252.112:47660.service: Deactivated successfully.
Apr 13 20:33:00.379277 systemd[1]: session-6.scope: Deactivated successfully.
Apr 13 20:33:00.381667 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Apr 13 20:33:00.383595 systemd-logind[1443]: Removed session 6.
Apr 13 20:33:00.499803 systemd[1]: Started sshd@6-10.128.0.70:22-20.229.252.112:47676.service - OpenSSH per-connection server daemon (20.229.252.112:47676).
Apr 13 20:33:01.227303 sshd[1707]: Accepted publickey for core from 20.229.252.112 port 47676 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY
Apr 13 20:33:01.229450 sshd[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Apr 13 20:33:01.235992 systemd-logind[1443]: New session 7 of user core.
Apr 13 20:33:01.244193 systemd[1]: Started session-7.scope - Session 7 of User core.
Apr 13 20:33:01.624236 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Apr 13 20:33:01.624866 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Apr 13 20:33:02.089854 systemd[1]: Starting docker.service - Docker Application Container Engine...
Apr 13 20:33:02.101738 (dockerd)[1726]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Apr 13 20:33:02.577330 dockerd[1726]: time="2026-04-13T20:33:02.577229826Z" level=info msg="Starting up"
Apr 13 20:33:02.710525 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3949733618-merged.mount: Deactivated successfully.
Apr 13 20:33:02.733237 systemd[1]: var-lib-docker-metacopy\x2dcheck570995400-merged.mount: Deactivated successfully.
Apr 13 20:33:02.758171 dockerd[1726]: time="2026-04-13T20:33:02.758038073Z" level=info msg="Loading containers: start."
Apr 13 20:33:02.921019 kernel: Initializing XFRM netlink socket
Apr 13 20:33:03.065499 systemd-networkd[1373]: docker0: Link UP
Apr 13 20:33:03.090974 dockerd[1726]: time="2026-04-13T20:33:03.090887147Z" level=info msg="Loading containers: done."
Apr 13 20:33:03.115077 dockerd[1726]: time="2026-04-13T20:33:03.114984395Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Apr 13 20:33:03.115341 dockerd[1726]: time="2026-04-13T20:33:03.115164471Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Apr 13 20:33:03.115427 dockerd[1726]: time="2026-04-13T20:33:03.115334715Z" level=info msg="Daemon has completed initialization"
Apr 13 20:33:03.166682 dockerd[1726]: time="2026-04-13T20:33:03.165851067Z" level=info msg="API listen on /run/docker.sock"
Apr 13 20:33:03.166260 systemd[1]: Started docker.service - Docker Application Container Engine.
Apr 13 20:33:03.955951 containerd[1462]: time="2026-04-13T20:33:03.954990947Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\""
Apr 13 20:33:04.020288 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Apr 13 20:33:04.031471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:33:04.396980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:33:04.405435 (kubelet)[1874]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:33:04.475113 kubelet[1874]: E0413 20:33:04.475027 1874 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:33:04.480569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:33:04.480874 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:33:04.818074 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2483105499.mount: Deactivated successfully.
Apr 13 20:33:07.149117 containerd[1462]: time="2026-04-13T20:33:07.149029915Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:07.150960 containerd[1462]: time="2026-04-13T20:33:07.150840956Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.35.3: active requests=0, bytes read=27570527"
Apr 13 20:33:07.152577 containerd[1462]: time="2026-04-13T20:33:07.151989774Z" level=info msg="ImageCreate event name:\"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:07.159382 containerd[1462]: time="2026-04-13T20:33:07.159280017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:07.162573 containerd[1462]: time="2026-04-13T20:33:07.162493350Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.35.3\" with image id \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\", repo tag \"registry.k8s.io/kube-apiserver:v1.35.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6c6e2571f98e738015a39ed21305ab4166a3e2873f9cc01d7fa58371cf0f5d30\", size \"27566295\" in 3.20743997s"
Apr 13 20:33:07.162721 containerd[1462]: time="2026-04-13T20:33:07.162577813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.35.3\" returns image reference \"sha256:0f2b96c93465f04111c58c3fc41ad0ed2e16b5b3c4b6282b84dc951ad0ea4d66\""
Apr 13 20:33:07.163866 containerd[1462]: time="2026-04-13T20:33:07.163475501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\""
Apr 13 20:33:08.755232 containerd[1462]: time="2026-04-13T20:33:08.755134290Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:08.757157 containerd[1462]: time="2026-04-13T20:33:08.757051950Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.35.3: active requests=0, bytes read=21449841"
Apr 13 20:33:08.759489 containerd[1462]: time="2026-04-13T20:33:08.758496818Z" level=info msg="ImageCreate event name:\"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:08.766096 containerd[1462]: time="2026-04-13T20:33:08.766022293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:08.767826 containerd[1462]: time="2026-04-13T20:33:08.767770233Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.35.3\" with image id \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\", repo tag \"registry.k8s.io/kube-controller-manager:v1.35.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:23a24aafa10831eb47477b0b31a525ee8a4a99d2c17251aac46c43be8201ec59\", size \"23014443\" in 1.60424381s"
Apr 13 20:33:08.768031 containerd[1462]: time="2026-04-13T20:33:08.768001994Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.35.3\" returns image reference \"sha256:0eb506280f9bca2258673771e7029de0d5e92881f0fbaebd4a835e7e302b7d27\""
Apr 13 20:33:08.768720 containerd[1462]: time="2026-04-13T20:33:08.768649671Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\""
Apr 13 20:33:10.115981 containerd[1462]: time="2026-04-13T20:33:10.115859115Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:10.121938 containerd[1462]: time="2026-04-13T20:33:10.119787992Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.35.3: active requests=0, bytes read=15548654"
Apr 13 20:33:10.121938 containerd[1462]: time="2026-04-13T20:33:10.120032814Z" level=info msg="ImageCreate event name:\"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:10.127999 containerd[1462]: time="2026-04-13T20:33:10.127944552Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:10.129717 containerd[1462]: time="2026-04-13T20:33:10.129657944Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.35.3\" with image id \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\", repo tag \"registry.k8s.io/kube-scheduler:v1.35.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:7070dff574916315268ab483f1088a107b1f3a8a1a87f3e3645933111ade7013\", size \"17113292\" in 1.360754413s"
Apr 13 20:33:10.129931 containerd[1462]: time="2026-04-13T20:33:10.129881361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.35.3\" returns image reference \"sha256:87c9b0e4f80d3039b60fbfaf2a4d423e6a891df883a55adb58b8d5b37a4cb23c\""
Apr 13 20:33:10.131120 containerd[1462]: time="2026-04-13T20:33:10.131069822Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\""
Apr 13 20:33:11.506377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount519310064.mount: Deactivated successfully.
Apr 13 20:33:12.029033 containerd[1462]: time="2026-04-13T20:33:12.028941698Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.35.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:12.030557 containerd[1462]: time="2026-04-13T20:33:12.030381841Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.35.3: active requests=0, bytes read=25685528"
Apr 13 20:33:12.031953 containerd[1462]: time="2026-04-13T20:33:12.031757342Z" level=info msg="ImageCreate event name:\"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:12.034878 containerd[1462]: time="2026-04-13T20:33:12.034807567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:12.036040 containerd[1462]: time="2026-04-13T20:33:12.035992742Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.35.3\" with image id \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\", repo tag \"registry.k8s.io/kube-proxy:v1.35.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:8743aec6a360aedcb7a076cbecea367b072abe1bfade2e2098650df502e2bc89\", size \"25684340\" in 1.904828664s"
Apr 13 20:33:12.036158 containerd[1462]: time="2026-04-13T20:33:12.036045858Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.35.3\" returns image reference \"sha256:53ed370019059b0cdce5a02a20f8aca81f977e34956368c7f1b7ce9709398b79\""
Apr 13 20:33:12.037110 containerd[1462]: time="2026-04-13T20:33:12.037063162Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\""
Apr 13 20:33:12.644707 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2837159647.mount: Deactivated successfully.
Apr 13 20:33:14.400672 containerd[1462]: time="2026-04-13T20:33:14.400574190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.13.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:14.402694 containerd[1462]: time="2026-04-13T20:33:14.402617893Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.13.1: active requests=0, bytes read=23557388"
Apr 13 20:33:14.405436 containerd[1462]: time="2026-04-13T20:33:14.403972967Z" level=info msg="ImageCreate event name:\"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:14.410085 containerd[1462]: time="2026-04-13T20:33:14.410007022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:14.412140 containerd[1462]: time="2026-04-13T20:33:14.411855119Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.13.1\" with image id \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\", repo tag \"registry.k8s.io/coredns/coredns:v1.13.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6\", size \"23553139\" in 2.374734009s"
Apr 13 20:33:14.412140 containerd[1462]: time="2026-04-13T20:33:14.411928563Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.13.1\" returns image reference \"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139\""
Apr 13 20:33:14.413293 containerd[1462]: time="2026-04-13T20:33:14.413233314Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Apr 13 20:33:14.520369 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Apr 13 20:33:14.529680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:33:14.873167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:33:14.883605 (kubelet)[2019]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Apr 13 20:33:14.943285 kubelet[2019]: E0413 20:33:14.943228 2019 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Apr 13 20:33:14.947427 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Apr 13 20:33:14.947739 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Apr 13 20:33:15.115334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095914659.mount: Deactivated successfully.
Apr 13 20:33:15.123397 containerd[1462]: time="2026-04-13T20:33:15.123322590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:15.124753 containerd[1462]: time="2026-04-13T20:33:15.124595032Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=321308"
Apr 13 20:33:15.126069 containerd[1462]: time="2026-04-13T20:33:15.126030528Z" level=info msg="ImageCreate event name:\"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:15.133522 containerd[1462]: time="2026-04-13T20:33:15.131818280Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:15.133522 containerd[1462]: time="2026-04-13T20:33:15.133085813Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"320448\" in 719.78465ms"
Apr 13 20:33:15.133522 containerd[1462]: time="2026-04-13T20:33:15.133127898Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f\""
Apr 13 20:33:15.134165 containerd[1462]: time="2026-04-13T20:33:15.134113595Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\""
Apr 13 20:33:15.697341 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3257369490.mount: Deactivated successfully.
Apr 13 20:33:16.952854 containerd[1462]: time="2026-04-13T20:33:16.952770319Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.6-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:16.954830 containerd[1462]: time="2026-04-13T20:33:16.954754254Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.6-0: active requests=0, bytes read=23644617"
Apr 13 20:33:16.955969 containerd[1462]: time="2026-04-13T20:33:16.955922106Z" level=info msg="ImageCreate event name:\"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:16.960952 containerd[1462]: time="2026-04-13T20:33:16.960773405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:16.966621 containerd[1462]: time="2026-04-13T20:33:16.966546378Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.6-0\" with image id \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\", repo tag \"registry.k8s.io/etcd:3.6.6-0\", repo digest \"registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890\", size \"23641797\" in 1.832383345s"
Apr 13 20:33:16.968670 containerd[1462]: time="2026-04-13T20:33:16.966798168Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.6-0\" returns image reference \"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2\""
Apr 13 20:33:18.994851 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:33:19.003363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:33:19.070796 systemd[1]: Reloading requested from client PID 2117 ('systemctl') (unit session-7.scope)...
Apr 13 20:33:19.070826 systemd[1]: Reloading...
Apr 13 20:33:19.293967 zram_generator::config[2158]: No configuration found.
Apr 13 20:33:19.448667 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Apr 13 20:33:19.592591 systemd[1]: Reloading finished in 520 ms.
Apr 13 20:33:19.679766 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Apr 13 20:33:19.680065 systemd[1]: kubelet.service: Failed with result 'signal'.
Apr 13 20:33:19.680554 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:33:19.688566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Apr 13 20:33:20.201834 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Apr 13 20:33:20.221682 (kubelet)[2207]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Apr 13 20:33:20.300935 kubelet[2207]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Apr 13 20:33:20.591917 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Apr 13 20:33:20.663308 kubelet[2207]: I0413 20:33:20.663224 2207 server.go:525] "Kubelet version" kubeletVersion="v1.35.1"
Apr 13 20:33:20.663308 kubelet[2207]: I0413 20:33:20.663297 2207 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Apr 13 20:33:20.663308 kubelet[2207]: I0413 20:33:20.663326 2207 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Apr 13 20:33:20.663607 kubelet[2207]: I0413 20:33:20.663337 2207 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Apr 13 20:33:20.663889 kubelet[2207]: I0413 20:33:20.663838 2207 server.go:951] "Client rotation is on, will bootstrap in background"
Apr 13 20:33:20.678957 kubelet[2207]: E0413 20:33:20.678653 2207 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.70:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.70:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Apr 13 20:33:20.679717 kubelet[2207]: I0413 20:33:20.679528 2207 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Apr 13 20:33:20.686255 kubelet[2207]: E0413 20:33:20.686191 2207 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Apr 13 20:33:20.686408 kubelet[2207]: I0413 20:33:20.686301 2207 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
Apr 13 20:33:20.692357 kubelet[2207]: I0413 20:33:20.691814 2207 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Apr 13 20:33:20.693358 kubelet[2207]: I0413 20:33:20.693304 2207 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Apr 13 20:33:20.693666 kubelet[2207]: I0413 20:33:20.693458 2207 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Apr 13 20:33:20.693999 kubelet[2207]: I0413 20:33:20.693981 2207 topology_manager.go:143] "Creating topology manager with none policy"
Apr 13 20:33:20.694080 kubelet[2207]: I0413 20:33:20.694071 2207 container_manager_linux.go:308] "Creating device plugin manager"
Apr 13 20:33:20.694229 kubelet[2207]: I0413 20:33:20.694218 2207 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager"
Apr 13 20:33:20.698012 kubelet[2207]: I0413 20:33:20.697945 2207 state_mem.go:41] "Initialized" logger="CPUManager state memory"
Apr 13 20:33:20.698271 kubelet[2207]: I0413 20:33:20.698233 2207 kubelet.go:482] "Attempting to sync node with API server"
Apr 13 20:33:20.698361 kubelet[2207]: I0413 20:33:20.698276 2207 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests"
Apr 13 20:33:20.698361 kubelet[2207]: I0413 20:33:20.698330 2207 kubelet.go:394] "Adding apiserver pod source"
Apr 13 20:33:20.698361 kubelet[2207]: I0413 20:33:20.698355 2207 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Apr 13 20:33:20.702468 kubelet[2207]: I0413 20:33:20.701714 2207 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Apr 13 20:33:20.705975 kubelet[2207]: I0413 20:33:20.705071 2207 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Apr 13 20:33:20.705975 kubelet[2207]: I0413 20:33:20.705131 2207 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Apr 13 20:33:20.705975 kubelet[2207]: W0413 20:33:20.705224 2207 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Apr 13 20:33:20.723935 kubelet[2207]: I0413 20:33:20.723677 2207 server.go:1257] "Started kubelet"
Apr 13 20:33:20.733161 kubelet[2207]: I0413 20:33:20.733114 2207 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer"
Apr 13 20:33:20.741955 kubelet[2207]: I0413 20:33:20.739540 2207 server.go:182] "Starting to listen" address="0.0.0.0" port=10250
Apr 13 20:33:20.741955 kubelet[2207]: I0413 20:33:20.741783 2207 server.go:317] "Adding debug handlers to kubelet server"
Apr 13 20:33:20.747936 kubelet[2207]: E0413 20:33:20.745690 2207 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.70:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.70:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal.18a604d1764bf109 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:33:20.723603721 +0000 UTC m=+0.494519742,LastTimestamp:2026-04-13 20:33:20.723603721 +0000 UTC m=+0.494519742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,}"
Apr 13 20:33:20.754930 kubelet[2207]: I0413 20:33:20.754055 2207 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Apr 13 20:33:20.754930 kubelet[2207]: I0413 20:33:20.754152 2207 server_v1.go:49] "podresources" method="list" useActivePods=true
Apr 13 20:33:20.754930 kubelet[2207]: I0413 20:33:20.754437 2207 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Apr 13 20:33:20.754930 kubelet[2207]: I0413 20:33:20.754826 2207 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Apr 13 20:33:20.756883 kubelet[2207]: I0413 20:33:20.756856 2207 volume_manager.go:311] "Starting Kubelet Volume Manager"
Apr 13 20:33:20.759154 kubelet[2207]: I0413 20:33:20.759123 2207 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Apr 13 20:33:20.759154 kubelet[2207]: E0413 20:33:20.758185 2207 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.70:6443: connect: connection refused" interval="200ms"
Apr 13 20:33:20.759154 kubelet[2207]: E0413 20:33:20.757198 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found"
Apr 13 20:33:20.759381 kubelet[2207]: I0413 20:33:20.759250 2207 reconciler.go:29] "Reconciler: start to sync state"
Apr 13 20:33:20.759780 kubelet[2207]: I0413 20:33:20.759739 2207 factory.go:223] Registration of the containerd container factory successfully
Apr 13 20:33:20.759780 kubelet[2207]: I0413 20:33:20.759785 2207 factory.go:223] Registration of the systemd container factory successfully
Apr 13 20:33:20.759973 kubelet[2207]: I0413 20:33:20.759936 2207 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Apr 13 20:33:20.797194 kubelet[2207]: E0413 20:33:20.793806 2207 kubelet.go:1656] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Apr 13 20:33:20.807137 kubelet[2207]: I0413 20:33:20.807071 2207 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Apr 13 20:33:20.811129 kubelet[2207]: I0413 20:33:20.811054 2207 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Apr 13 20:33:20.811316 kubelet[2207]: I0413 20:33:20.811178 2207 status_manager.go:249] "Starting to sync pod status with apiserver"
Apr 13 20:33:20.811386 kubelet[2207]: I0413 20:33:20.811338 2207 kubelet.go:2501] "Starting kubelet main sync loop"
Apr 13 20:33:20.813931 kubelet[2207]: E0413 20:33:20.812669 2207 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Apr 13 20:33:20.824069 kubelet[2207]: I0413 20:33:20.824011 2207 cpu_manager.go:225] "Starting" policy="none"
Apr 13 20:33:20.824304 kubelet[2207]: I0413 20:33:20.824284 2207 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Apr 13 20:33:20.824433 kubelet[2207]: I0413 20:33:20.824417 2207 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory"
Apr 13 20:33:20.827227 kubelet[2207]: I0413 20:33:20.827184 2207 policy_none.go:50] "Start"
Apr 13 20:33:20.827417 kubelet[2207]: I0413 20:33:20.827380 2207 memory_manager.go:187] "Starting memorymanager" policy="None"
Apr 13 20:33:20.827581 kubelet[2207]: I0413 20:33:20.827564 2207 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Apr 13 20:33:20.830069 kubelet[2207]: I0413 20:33:20.830041 2207 policy_none.go:44] "Start"
Apr 13 20:33:20.837764 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Apr 13 20:33:20.859548 kubelet[2207]: E0413 20:33:20.859396 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:20.862640 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 13 20:33:20.876167 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 13 20:33:20.880657 kubelet[2207]: E0413 20:33:20.880244 2207 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:33:20.882014 kubelet[2207]: I0413 20:33:20.881009 2207 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 20:33:20.882014 kubelet[2207]: I0413 20:33:20.881041 2207 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:33:20.882014 kubelet[2207]: I0413 20:33:20.881717 2207 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 20:33:20.884141 kubelet[2207]: E0413 20:33:20.883864 2207 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:33:20.884141 kubelet[2207]: E0413 20:33:20.883948 2207 eviction_manager.go:297] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:20.947090 systemd[1]: Created slice kubepods-burstable-pod6da740952fe25e3b701fe16cefa6a290.slice - libcontainer container kubepods-burstable-pod6da740952fe25e3b701fe16cefa6a290.slice. 
Apr 13 20:33:20.959959 kubelet[2207]: I0413 20:33:20.959566 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.959959 kubelet[2207]: E0413 20:33:20.959605 2207 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.70:6443: connect: connection refused" interval="400ms" Apr 13 20:33:20.959959 kubelet[2207]: I0413 20:33:20.959628 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43e8243a1e481b83e39621c20e84ef25-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"43e8243a1e481b83e39621c20e84ef25\") " pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.959959 kubelet[2207]: I0413 20:33:20.959665 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960228 kubelet[2207]: I0413 20:33:20.959700 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960228 kubelet[2207]: I0413 20:33:20.959729 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960228 kubelet[2207]: I0413 20:33:20.959759 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960228 kubelet[2207]: I0413 20:33:20.959816 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960361 kubelet[2207]: I0413 20:33:20.959847 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.960361 kubelet[2207]: I0413 20:33:20.959889 2207 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.963520 kubelet[2207]: E0413 20:33:20.963202 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.970547 systemd[1]: Created slice kubepods-burstable-pod2fb6d8b9f7d5900536212e824d02edd7.slice - libcontainer container kubepods-burstable-pod2fb6d8b9f7d5900536212e824d02edd7.slice. Apr 13 20:33:20.975092 kubelet[2207]: E0413 20:33:20.974366 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.979074 systemd[1]: Created slice kubepods-burstable-pod43e8243a1e481b83e39621c20e84ef25.slice - libcontainer container kubepods-burstable-pod43e8243a1e481b83e39621c20e84ef25.slice. 
Apr 13 20:33:20.982163 kubelet[2207]: E0413 20:33:20.982105 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.992352 kubelet[2207]: I0413 20:33:20.991731 2207 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:20.992352 kubelet[2207]: E0413 20:33:20.992306 2207 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.70:6443/api/v1/nodes\": dial tcp 10.128.0.70:6443: connect: connection refused" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:21.200189 kubelet[2207]: I0413 20:33:21.200040 2207 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:21.201143 kubelet[2207]: E0413 20:33:21.201088 2207 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.70:6443/api/v1/nodes\": dial tcp 10.128.0.70:6443: connect: connection refused" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:21.268657 containerd[1462]: time="2026-04-13T20:33:21.268566029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:6da740952fe25e3b701fe16cefa6a290,Namespace:kube-system,Attempt:0,}" Apr 13 20:33:21.279101 containerd[1462]: time="2026-04-13T20:33:21.279026708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:2fb6d8b9f7d5900536212e824d02edd7,Namespace:kube-system,Attempt:0,}" Apr 13 20:33:21.291595 containerd[1462]: time="2026-04-13T20:33:21.291109425Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:43e8243a1e481b83e39621c20e84ef25,Namespace:kube-system,Attempt:0,}" Apr 13 20:33:21.360796 kubelet[2207]: E0413 20:33:21.360574 2207 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.70:6443: connect: connection refused" interval="800ms" Apr 13 20:33:21.608264 kubelet[2207]: I0413 20:33:21.608212 2207 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:21.608716 kubelet[2207]: E0413 20:33:21.608672 2207 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.70:6443/api/v1/nodes\": dial tcp 10.128.0.70:6443: connect: connection refused" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:21.821051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount398560933.mount: Deactivated successfully. 
Apr 13 20:33:21.832777 containerd[1462]: time="2026-04-13T20:33:21.832685339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:33:21.834749 containerd[1462]: time="2026-04-13T20:33:21.834675977Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312146" Apr 13 20:33:21.837248 containerd[1462]: time="2026-04-13T20:33:21.837180540Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:33:21.840573 containerd[1462]: time="2026-04-13T20:33:21.839791726Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:33:21.840573 containerd[1462]: time="2026-04-13T20:33:21.840253884Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:33:21.842919 containerd[1462]: time="2026-04-13T20:33:21.842254845Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:33:21.842919 containerd[1462]: time="2026-04-13T20:33:21.842521435Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 13 20:33:21.847868 containerd[1462]: time="2026-04-13T20:33:21.847800841Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 13 20:33:21.851228 
containerd[1462]: time="2026-04-13T20:33:21.851174039Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 572.038548ms" Apr 13 20:33:21.854124 containerd[1462]: time="2026-04-13T20:33:21.854050171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 562.847102ms" Apr 13 20:33:21.855408 containerd[1462]: time="2026-04-13T20:33:21.855335873Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 586.578313ms" Apr 13 20:33:22.094653 containerd[1462]: time="2026-04-13T20:33:22.094125894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:22.094653 containerd[1462]: time="2026-04-13T20:33:22.094205851Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:22.094653 containerd[1462]: time="2026-04-13T20:33:22.094248618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.094653 containerd[1462]: time="2026-04-13T20:33:22.094426331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.100941 containerd[1462]: time="2026-04-13T20:33:22.099830566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:22.100941 containerd[1462]: time="2026-04-13T20:33:22.100039943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:22.100941 containerd[1462]: time="2026-04-13T20:33:22.100108940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.100941 containerd[1462]: time="2026-04-13T20:33:22.100585161Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.102757 containerd[1462]: time="2026-04-13T20:33:22.102267108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:22.102757 containerd[1462]: time="2026-04-13T20:33:22.102340594Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:22.102757 containerd[1462]: time="2026-04-13T20:33:22.102401786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.102757 containerd[1462]: time="2026-04-13T20:33:22.102565160Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:22.153219 systemd[1]: Started cri-containerd-d43f2e5ea5a6e176f46ecc1d97022461f907b4973a9f2eb259d28b24ad152fb0.scope - libcontainer container d43f2e5ea5a6e176f46ecc1d97022461f907b4973a9f2eb259d28b24ad152fb0. 
Apr 13 20:33:22.163603 kubelet[2207]: E0413 20:33:22.163507 2207 controller.go:201] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.70:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.70:6443: connect: connection refused" interval="1.6s" Apr 13 20:33:22.172200 systemd[1]: Started cri-containerd-0ba4a988591f1a495d67bf13599acb4d4c5b0bd13fa315c88fe578ebc64c1793.scope - libcontainer container 0ba4a988591f1a495d67bf13599acb4d4c5b0bd13fa315c88fe578ebc64c1793. Apr 13 20:33:22.195303 systemd[1]: Started cri-containerd-de72375748412710e642799c386b529796deb15efdb36e0df21ae70cd55b87c7.scope - libcontainer container de72375748412710e642799c386b529796deb15efdb36e0df21ae70cd55b87c7. Apr 13 20:33:22.302448 containerd[1462]: time="2026-04-13T20:33:22.302379712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:6da740952fe25e3b701fe16cefa6a290,Namespace:kube-system,Attempt:0,} returns sandbox id \"d43f2e5ea5a6e176f46ecc1d97022461f907b4973a9f2eb259d28b24ad152fb0\"" Apr 13 20:33:22.311347 kubelet[2207]: E0413 20:33:22.310417 2207 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-21291" Apr 13 20:33:22.318023 containerd[1462]: time="2026-04-13T20:33:22.317870589Z" level=info msg="CreateContainer within sandbox \"d43f2e5ea5a6e176f46ecc1d97022461f907b4973a9f2eb259d28b24ad152fb0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Apr 13 20:33:22.341097 containerd[1462]: time="2026-04-13T20:33:22.340978837Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:43e8243a1e481b83e39621c20e84ef25,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ba4a988591f1a495d67bf13599acb4d4c5b0bd13fa315c88fe578ebc64c1793\"" Apr 13 20:33:22.345435 containerd[1462]: time="2026-04-13T20:33:22.344088045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,Uid:2fb6d8b9f7d5900536212e824d02edd7,Namespace:kube-system,Attempt:0,} returns sandbox id \"de72375748412710e642799c386b529796deb15efdb36e0df21ae70cd55b87c7\"" Apr 13 20:33:22.348565 kubelet[2207]: E0413 20:33:22.348511 2207 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-21291" Apr 13 20:33:22.353132 kubelet[2207]: E0413 20:33:22.353069 2207 kubelet_pods.go:562] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flat" Apr 13 20:33:22.357199 containerd[1462]: time="2026-04-13T20:33:22.357142563Z" level=info msg="CreateContainer within sandbox \"0ba4a988591f1a495d67bf13599acb4d4c5b0bd13fa315c88fe578ebc64c1793\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Apr 13 20:33:22.361166 containerd[1462]: time="2026-04-13T20:33:22.361123246Z" level=info msg="CreateContainer within sandbox \"de72375748412710e642799c386b529796deb15efdb36e0df21ae70cd55b87c7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Apr 13 20:33:22.367245 containerd[1462]: time="2026-04-13T20:33:22.367053433Z" level=info msg="CreateContainer within sandbox 
\"d43f2e5ea5a6e176f46ecc1d97022461f907b4973a9f2eb259d28b24ad152fb0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6ea235e6a247747cb7c02e16d8147b67b34ed2047a657417e5efd97c79fcf005\"" Apr 13 20:33:22.369922 containerd[1462]: time="2026-04-13T20:33:22.368184764Z" level=info msg="StartContainer for \"6ea235e6a247747cb7c02e16d8147b67b34ed2047a657417e5efd97c79fcf005\"" Apr 13 20:33:22.387874 containerd[1462]: time="2026-04-13T20:33:22.387756871Z" level=info msg="CreateContainer within sandbox \"de72375748412710e642799c386b529796deb15efdb36e0df21ae70cd55b87c7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3814ec6d24008f460e5b39916ba5dd5f42c57befed814482094b4b6048386714\"" Apr 13 20:33:22.389219 containerd[1462]: time="2026-04-13T20:33:22.389170091Z" level=info msg="StartContainer for \"3814ec6d24008f460e5b39916ba5dd5f42c57befed814482094b4b6048386714\"" Apr 13 20:33:22.396865 containerd[1462]: time="2026-04-13T20:33:22.396807956Z" level=info msg="CreateContainer within sandbox \"0ba4a988591f1a495d67bf13599acb4d4c5b0bd13fa315c88fe578ebc64c1793\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2a330707c6ccb00fdb6995a5d8d4178cadc96216d8c025ec4bf205265c3efaa5\"" Apr 13 20:33:22.397943 containerd[1462]: time="2026-04-13T20:33:22.397840160Z" level=info msg="StartContainer for \"2a330707c6ccb00fdb6995a5d8d4178cadc96216d8c025ec4bf205265c3efaa5\"" Apr 13 20:33:22.414410 kubelet[2207]: I0413 20:33:22.414371 2207 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:22.416808 kubelet[2207]: E0413 20:33:22.416037 2207 kubelet_node_status.go:106] "Unable to register node with API server" err="Post \"https://10.128.0.70:6443/api/v1/nodes\": dial tcp 10.128.0.70:6443: connect: connection refused" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:22.459227 systemd[1]: Started 
cri-containerd-6ea235e6a247747cb7c02e16d8147b67b34ed2047a657417e5efd97c79fcf005.scope - libcontainer container 6ea235e6a247747cb7c02e16d8147b67b34ed2047a657417e5efd97c79fcf005. Apr 13 20:33:22.481530 systemd[1]: Started cri-containerd-2a330707c6ccb00fdb6995a5d8d4178cadc96216d8c025ec4bf205265c3efaa5.scope - libcontainer container 2a330707c6ccb00fdb6995a5d8d4178cadc96216d8c025ec4bf205265c3efaa5. Apr 13 20:33:22.487533 systemd[1]: Started cri-containerd-3814ec6d24008f460e5b39916ba5dd5f42c57befed814482094b4b6048386714.scope - libcontainer container 3814ec6d24008f460e5b39916ba5dd5f42c57befed814482094b4b6048386714. Apr 13 20:33:22.576340 containerd[1462]: time="2026-04-13T20:33:22.576276795Z" level=info msg="StartContainer for \"6ea235e6a247747cb7c02e16d8147b67b34ed2047a657417e5efd97c79fcf005\" returns successfully" Apr 13 20:33:22.650056 containerd[1462]: time="2026-04-13T20:33:22.648430085Z" level=info msg="StartContainer for \"2a330707c6ccb00fdb6995a5d8d4178cadc96216d8c025ec4bf205265c3efaa5\" returns successfully" Apr 13 20:33:22.659054 containerd[1462]: time="2026-04-13T20:33:22.658987224Z" level=info msg="StartContainer for \"3814ec6d24008f460e5b39916ba5dd5f42c57befed814482094b4b6048386714\" returns successfully" Apr 13 20:33:22.850504 kubelet[2207]: E0413 20:33:22.850343 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:22.851239 kubelet[2207]: E0413 20:33:22.850891 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:22.861354 kubelet[2207]: E0413 20:33:22.861061 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node 
info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:23.860149 kubelet[2207]: E0413 20:33:23.860083 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:23.861833 kubelet[2207]: E0413 20:33:23.861335 2207 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:24.027032 kubelet[2207]: I0413 20:33:24.025179 2207 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:24.315158 kubelet[2207]: E0413 20:33:24.315103 2207 nodelease.go:50] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:24.438713 kubelet[2207]: I0413 20:33:24.438627 2207 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:24.438713 kubelet[2207]: E0413 20:33:24.438712 2207 kubelet_node_status.go:474] "Error updating node status, will retry" err="error getting node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\": node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:24.456537 kubelet[2207]: E0413 20:33:24.456462 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" 
Apr 13 20:33:24.486391 kubelet[2207]: E0413 20:33:24.486198 2207 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal.18a604d1764bf109 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,UID:,APIVersion:v1,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,},FirstTimestamp:2026-04-13 20:33:20.723603721 +0000 UTC m=+0.494519742,LastTimestamp:2026-04-13 20:33:20.723603721 +0000 UTC m=+0.494519742,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal,}" Apr 13 20:33:24.557031 kubelet[2207]: E0413 20:33:24.556692 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:24.658441 kubelet[2207]: E0413 20:33:24.657192 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:24.758043 kubelet[2207]: E0413 20:33:24.757994 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:24.858981 kubelet[2207]: E0413 20:33:24.858931 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:24.959760 kubelet[2207]: E0413 20:33:24.959508 2207 kubelet_node_status.go:392] "Error getting the current node from lister" 
err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:25.060388 kubelet[2207]: E0413 20:33:25.060317 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:25.161191 kubelet[2207]: E0413 20:33:25.161103 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:25.261527 kubelet[2207]: E0413 20:33:25.261343 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:25.361792 kubelet[2207]: E0413 20:33:25.361701 2207 kubelet_node_status.go:392] "Error getting the current node from lister" err="node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" not found" Apr 13 20:33:25.458805 kubelet[2207]: I0413 20:33:25.458119 2207 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:25.467973 kubelet[2207]: I0413 20:33:25.467893 2207 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:25.468171 kubelet[2207]: I0413 20:33:25.468132 2207 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:25.478927 kubelet[2207]: I0413 20:33:25.478189 2207 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:25.478927 kubelet[2207]: I0413 
20:33:25.478346 2207 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:25.486999 kubelet[2207]: I0413 20:33:25.486955 2207 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:25.704276 kubelet[2207]: I0413 20:33:25.703981 2207 apiserver.go:52] "Watching apiserver" Apr 13 20:33:25.760273 kubelet[2207]: I0413 20:33:25.760177 2207 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:33:26.745673 systemd[1]: Reloading requested from client PID 2494 ('systemctl') (unit session-7.scope)... Apr 13 20:33:26.745717 systemd[1]: Reloading... Apr 13 20:33:26.993942 zram_generator::config[2535]: No configuration found. Apr 13 20:33:27.192008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 13 20:33:27.341947 systemd[1]: Reloading finished in 595 ms. Apr 13 20:33:27.412801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:33:27.431343 systemd[1]: kubelet.service: Deactivated successfully. Apr 13 20:33:27.431883 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 13 20:33:27.432022 systemd[1]: kubelet.service: Consumed 1.120s CPU time, 128.6M memory peak, 0B memory swap peak. Apr 13 20:33:27.436616 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 13 20:33:27.750048 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Apr 13 20:33:27.766757 (kubelet)[2582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 13 20:33:27.864822 kubelet[2582]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 13 20:33:27.883978 kubelet[2582]: I0413 20:33:27.882807 2582 server.go:525] "Kubelet version" kubeletVersion="v1.35.1" Apr 13 20:33:27.883978 kubelet[2582]: I0413 20:33:27.882882 2582 server.go:527] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 13 20:33:27.883978 kubelet[2582]: I0413 20:33:27.882931 2582 watchdog_linux.go:95] "Systemd watchdog is not enabled" Apr 13 20:33:27.883978 kubelet[2582]: I0413 20:33:27.882946 2582 watchdog_linux.go:138] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Apr 13 20:33:27.883978 kubelet[2582]: I0413 20:33:27.883499 2582 server.go:951] "Client rotation is on, will bootstrap in background" Apr 13 20:33:27.887019 kubelet[2582]: I0413 20:33:27.886952 2582 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Apr 13 20:33:27.892766 kubelet[2582]: I0413 20:33:27.892672 2582 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 13 20:33:27.908090 kubelet[2582]: E0413 20:33:27.908042 2582 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 13 20:33:27.909288 kubelet[2582]: I0413 20:33:27.908566 2582 server.go:1395] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." 
Apr 13 20:33:27.915319 kubelet[2582]: I0413 20:33:27.915287 2582 server.go:775] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Apr 13 20:33:27.916924 kubelet[2582]: I0413 20:33:27.915865 2582 container_manager_linux.go:272] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 13 20:33:27.916924 kubelet[2582]: I0413 20:33:27.915974 2582 container_manager_linux.go:277] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolic
yOptions":null,"CgroupVersion":2} Apr 13 20:33:27.916924 kubelet[2582]: I0413 20:33:27.916283 2582 topology_manager.go:143] "Creating topology manager with none policy" Apr 13 20:33:27.916924 kubelet[2582]: I0413 20:33:27.916300 2582 container_manager_linux.go:308] "Creating device plugin manager" Apr 13 20:33:27.917358 kubelet[2582]: I0413 20:33:27.916362 2582 container_manager_linux.go:317] "Creating Dynamic Resource Allocation (DRA) manager" Apr 13 20:33:27.917358 kubelet[2582]: I0413 20:33:27.916666 2582 state_mem.go:41] "Initialized" logger="CPUManager state memory" Apr 13 20:33:27.917358 kubelet[2582]: I0413 20:33:27.916866 2582 kubelet.go:482] "Attempting to sync node with API server" Apr 13 20:33:27.918363 kubelet[2582]: I0413 20:33:27.916890 2582 kubelet.go:383] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 13 20:33:27.918363 kubelet[2582]: I0413 20:33:27.917574 2582 kubelet.go:394] "Adding apiserver pod source" Apr 13 20:33:27.918363 kubelet[2582]: I0413 20:33:27.917593 2582 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 13 20:33:27.923436 kubelet[2582]: I0413 20:33:27.923403 2582 kuberuntime_manager.go:294] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Apr 13 20:33:27.933717 kubelet[2582]: I0413 20:33:27.928538 2582 kubelet.go:943] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Apr 13 20:33:27.933717 kubelet[2582]: I0413 20:33:27.928618 2582 kubelet.go:970] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Apr 13 20:33:27.979178 kubelet[2582]: I0413 20:33:27.979100 2582 server.go:1257] "Started kubelet" Apr 13 20:33:27.980435 kubelet[2582]: I0413 20:33:27.979918 2582 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 13 
20:33:27.980435 kubelet[2582]: I0413 20:33:27.979984 2582 server_v1.go:49] "podresources" method="list" useActivePods=true Apr 13 20:33:27.980435 kubelet[2582]: I0413 20:33:27.980341 2582 server.go:254] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 13 20:33:27.980435 kubelet[2582]: I0413 20:33:27.980423 2582 server.go:182] "Starting to listen" address="0.0.0.0" port=10250 Apr 13 20:33:27.984075 kubelet[2582]: I0413 20:33:27.983999 2582 server.go:317] "Adding debug handlers to kubelet server" Apr 13 20:33:27.988947 kubelet[2582]: I0413 20:33:27.986497 2582 fs_resource_analyzer.go:69] "Starting FS ResourceAnalyzer" Apr 13 20:33:27.994702 kubelet[2582]: I0413 20:33:27.994648 2582 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 13 20:33:28.006752 kubelet[2582]: I0413 20:33:28.001811 2582 volume_manager.go:311] "Starting Kubelet Volume Manager" Apr 13 20:33:28.008750 kubelet[2582]: I0413 20:33:28.008529 2582 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 13 20:33:28.010196 kubelet[2582]: I0413 20:33:28.010145 2582 reconciler.go:29] "Reconciler: start to sync state" Apr 13 20:33:28.020792 kubelet[2582]: I0413 20:33:28.020663 2582 factory.go:223] Registration of the systemd container factory successfully Apr 13 20:33:28.021046 kubelet[2582]: I0413 20:33:28.020808 2582 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 13 20:33:28.026770 kubelet[2582]: I0413 20:33:28.026328 2582 factory.go:223] Registration of the containerd container factory successfully Apr 13 20:33:28.039134 kubelet[2582]: I0413 20:33:28.038119 2582 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv4" Apr 13 20:33:28.049816 kubelet[2582]: I0413 20:33:28.049777 2582 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Apr 13 20:33:28.053017 kubelet[2582]: I0413 20:33:28.051659 2582 status_manager.go:249] "Starting to sync pod status with apiserver" Apr 13 20:33:28.053017 kubelet[2582]: I0413 20:33:28.051712 2582 kubelet.go:2501] "Starting kubelet main sync loop" Apr 13 20:33:28.053017 kubelet[2582]: E0413 20:33:28.051811 2582 kubelet.go:2525] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 13 20:33:28.152105 kubelet[2582]: E0413 20:33:28.152047 2582 kubelet.go:2525] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191079 2582 cpu_manager.go:225] "Starting" policy="none" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191106 2582 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191137 2582 state_mem.go:41] "Initialized" logger="CPUManager state checkpoint.CPUManager state memory" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191379 2582 state_mem.go:94] "Updated default CPUSet" logger="CPUManager state checkpoint.CPUManager state memory" cpuSet="" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191400 2582 state_mem.go:102] "Updated CPUSet assignments" logger="CPUManager state checkpoint.CPUManager state memory" assignments={} Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191432 2582 policy_none.go:50] "Start" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191448 2582 memory_manager.go:187] "Starting memorymanager" policy="None" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191467 2582 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Apr 13 20:33:28.192217 kubelet[2582]: I0413 
20:33:28.191697 2582 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Apr 13 20:33:28.192217 kubelet[2582]: I0413 20:33:28.191714 2582 policy_none.go:44] "Start" Apr 13 20:33:28.205971 kubelet[2582]: E0413 20:33:28.205363 2582 manager.go:525] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Apr 13 20:33:28.205971 kubelet[2582]: I0413 20:33:28.205677 2582 eviction_manager.go:194] "Eviction manager: starting control loop" Apr 13 20:33:28.205971 kubelet[2582]: I0413 20:33:28.205725 2582 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 13 20:33:28.207492 kubelet[2582]: I0413 20:33:28.207433 2582 plugin_manager.go:121] "Starting Kubelet Plugin Manager" Apr 13 20:33:28.218279 kubelet[2582]: E0413 20:33:28.216415 2582 eviction_manager.go:272] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Apr 13 20:33:28.336107 kubelet[2582]: I0413 20:33:28.335059 2582 kubelet_node_status.go:74] "Attempting to register node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.347644 kubelet[2582]: I0413 20:33:28.347118 2582 kubelet_node_status.go:123] "Node was previously registered" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.347644 kubelet[2582]: I0413 20:33:28.347261 2582 kubelet_node_status.go:77] "Successfully registered node" node="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.356930 kubelet[2582]: I0413 20:33:28.354979 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.357136 kubelet[2582]: I0413 20:33:28.357033 2582 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.359602 kubelet[2582]: I0413 20:33:28.359559 2582 kubelet.go:3340] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.373953 kubelet[2582]: I0413 20:33:28.373589 2582 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:28.373953 kubelet[2582]: E0413 20:33:28.373667 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.377415 kubelet[2582]: I0413 20:33:28.377346 2582 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:28.377730 kubelet[2582]: I0413 20:33:28.377428 2582 warnings.go:107] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]" Apr 13 20:33:28.377730 kubelet[2582]: E0413 20:33:28.377483 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.377730 kubelet[2582]: E0413 20:33:28.377590 2582 kubelet.go:3342] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" already exists" 
pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.415919 kubelet[2582]: I0413 20:33:28.415817 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-k8s-certs\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416127 kubelet[2582]: I0413 20:33:28.415932 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416127 kubelet[2582]: I0413 20:33:28.415976 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-ca-certs\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416127 kubelet[2582]: I0413 20:33:28.416040 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-kubeconfig\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: 
\"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416127 kubelet[2582]: I0413 20:33:28.416073 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/43e8243a1e481b83e39621c20e84ef25-kubeconfig\") pod \"kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"43e8243a1e481b83e39621c20e84ef25\") " pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416389 kubelet[2582]: I0413 20:33:28.416114 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-ca-certs\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416389 kubelet[2582]: I0413 20:33:28.416144 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-k8s-certs\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416389 kubelet[2582]: I0413 20:33:28.416184 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6da740952fe25e3b701fe16cefa6a290-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"6da740952fe25e3b701fe16cefa6a290\") " 
pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.416389 kubelet[2582]: I0413 20:33:28.416224 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2fb6d8b9f7d5900536212e824d02edd7-flexvolume-dir\") pod \"kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" (UID: \"2fb6d8b9f7d5900536212e824d02edd7\") " pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:33:28.919891 kubelet[2582]: I0413 20:33:28.919469 2582 apiserver.go:52] "Watching apiserver" Apr 13 20:33:29.010139 kubelet[2582]: I0413 20:33:29.010037 2582 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 13 20:33:29.248263 kubelet[2582]: I0413 20:33:29.247706 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" podStartSLOduration=4.247686194 podStartE2EDuration="4.247686194s" podCreationTimestamp="2026-04-13 20:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:33:29.247334235 +0000 UTC m=+1.469052189" watchObservedRunningTime="2026-04-13 20:33:29.247686194 +0000 UTC m=+1.469404105" Apr 13 20:33:29.300331 kubelet[2582]: I0413 20:33:29.299846 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" podStartSLOduration=4.299821619 podStartE2EDuration="4.299821619s" podCreationTimestamp="2026-04-13 20:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:33:29.275470949 +0000 UTC 
m=+1.497188884" watchObservedRunningTime="2026-04-13 20:33:29.299821619 +0000 UTC m=+1.521539555" Apr 13 20:33:29.318522 kubelet[2582]: I0413 20:33:29.318443 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" podStartSLOduration=4.318421541 podStartE2EDuration="4.318421541s" podCreationTimestamp="2026-04-13 20:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:33:29.30099583 +0000 UTC m=+1.522713763" watchObservedRunningTime="2026-04-13 20:33:29.318421541 +0000 UTC m=+1.540139476" Apr 13 20:33:31.738965 kubelet[2582]: I0413 20:33:31.738915 2582 kuberuntime_manager.go:2062] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Apr 13 20:33:31.739614 containerd[1462]: time="2026-04-13T20:33:31.739464694Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 13 20:33:31.743201 kubelet[2582]: I0413 20:33:31.740893 2582 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Apr 13 20:33:32.907986 systemd[1]: Created slice kubepods-besteffort-pod258bde81_6d34_4114_813c_63507b4692b8.slice - libcontainer container kubepods-besteffort-pod258bde81_6d34_4114_813c_63507b4692b8.slice. 
Apr 13 20:33:32.951552 kubelet[2582]: I0413 20:33:32.951490 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/258bde81-6d34-4114-813c-63507b4692b8-kube-proxy\") pod \"kube-proxy-6vg6b\" (UID: \"258bde81-6d34-4114-813c-63507b4692b8\") " pod="kube-system/kube-proxy-6vg6b" Apr 13 20:33:32.951552 kubelet[2582]: I0413 20:33:32.951553 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46gvc\" (UniqueName: \"kubernetes.io/projected/258bde81-6d34-4114-813c-63507b4692b8-kube-api-access-46gvc\") pod \"kube-proxy-6vg6b\" (UID: \"258bde81-6d34-4114-813c-63507b4692b8\") " pod="kube-system/kube-proxy-6vg6b" Apr 13 20:33:32.952445 kubelet[2582]: I0413 20:33:32.951589 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/258bde81-6d34-4114-813c-63507b4692b8-xtables-lock\") pod \"kube-proxy-6vg6b\" (UID: \"258bde81-6d34-4114-813c-63507b4692b8\") " pod="kube-system/kube-proxy-6vg6b" Apr 13 20:33:32.952445 kubelet[2582]: I0413 20:33:32.951621 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/258bde81-6d34-4114-813c-63507b4692b8-lib-modules\") pod \"kube-proxy-6vg6b\" (UID: \"258bde81-6d34-4114-813c-63507b4692b8\") " pod="kube-system/kube-proxy-6vg6b" Apr 13 20:33:33.034077 systemd[1]: Created slice kubepods-besteffort-pod2336f846_7101_45f4_8020_3cf5bfbed513.slice - libcontainer container kubepods-besteffort-pod2336f846_7101_45f4_8020_3cf5bfbed513.slice. 
Apr 13 20:33:33.052773 kubelet[2582]: I0413 20:33:33.052719 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2336f846-7101-45f4-8020-3cf5bfbed513-var-lib-calico\") pod \"tigera-operator-6cf4cccc57-44gsp\" (UID: \"2336f846-7101-45f4-8020-3cf5bfbed513\") " pod="tigera-operator/tigera-operator-6cf4cccc57-44gsp" Apr 13 20:33:33.052967 kubelet[2582]: I0413 20:33:33.052880 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh877\" (UniqueName: \"kubernetes.io/projected/2336f846-7101-45f4-8020-3cf5bfbed513-kube-api-access-mh877\") pod \"tigera-operator-6cf4cccc57-44gsp\" (UID: \"2336f846-7101-45f4-8020-3cf5bfbed513\") " pod="tigera-operator/tigera-operator-6cf4cccc57-44gsp" Apr 13 20:33:33.226436 containerd[1462]: time="2026-04-13T20:33:33.226280925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vg6b,Uid:258bde81-6d34-4114-813c-63507b4692b8,Namespace:kube-system,Attempt:0,}" Apr 13 20:33:33.278203 containerd[1462]: time="2026-04-13T20:33:33.277662169Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:33.278203 containerd[1462]: time="2026-04-13T20:33:33.277775821Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:33.278203 containerd[1462]: time="2026-04-13T20:33:33.277810285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:33.278605 containerd[1462]: time="2026-04-13T20:33:33.278095123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:33.319212 systemd[1]: Started cri-containerd-c16d1a57f6a79746109ff6ac795033c6ab04e2d9b607cf465ca25fb9c26c1256.scope - libcontainer container c16d1a57f6a79746109ff6ac795033c6ab04e2d9b607cf465ca25fb9c26c1256. Apr 13 20:33:33.345313 containerd[1462]: time="2026-04-13T20:33:33.345257796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-44gsp,Uid:2336f846-7101-45f4-8020-3cf5bfbed513,Namespace:tigera-operator,Attempt:0,}" Apr 13 20:33:33.367943 containerd[1462]: time="2026-04-13T20:33:33.367196709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6vg6b,Uid:258bde81-6d34-4114-813c-63507b4692b8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c16d1a57f6a79746109ff6ac795033c6ab04e2d9b607cf465ca25fb9c26c1256\"" Apr 13 20:33:33.378949 containerd[1462]: time="2026-04-13T20:33:33.378875052Z" level=info msg="CreateContainer within sandbox \"c16d1a57f6a79746109ff6ac795033c6ab04e2d9b607cf465ca25fb9c26c1256\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 13 20:33:33.402738 containerd[1462]: time="2026-04-13T20:33:33.402415911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:33.402738 containerd[1462]: time="2026-04-13T20:33:33.402542181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:33.402738 containerd[1462]: time="2026-04-13T20:33:33.402578529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:33.403645 containerd[1462]: time="2026-04-13T20:33:33.402746441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:33.410766 containerd[1462]: time="2026-04-13T20:33:33.410707661Z" level=info msg="CreateContainer within sandbox \"c16d1a57f6a79746109ff6ac795033c6ab04e2d9b607cf465ca25fb9c26c1256\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cea4894cd6756067fe0091eb57c59b9d9d00b4d516ed5be4c009273868467cd3\"" Apr 13 20:33:33.412493 containerd[1462]: time="2026-04-13T20:33:33.412369870Z" level=info msg="StartContainer for \"cea4894cd6756067fe0091eb57c59b9d9d00b4d516ed5be4c009273868467cd3\"" Apr 13 20:33:33.445213 systemd[1]: Started cri-containerd-14fa379b3b3ff3a95d60d74e533469dea18cdd182437de52bd57ba81ee972fd5.scope - libcontainer container 14fa379b3b3ff3a95d60d74e533469dea18cdd182437de52bd57ba81ee972fd5. Apr 13 20:33:33.490328 systemd[1]: Started cri-containerd-cea4894cd6756067fe0091eb57c59b9d9d00b4d516ed5be4c009273868467cd3.scope - libcontainer container cea4894cd6756067fe0091eb57c59b9d9d00b4d516ed5be4c009273868467cd3. Apr 13 20:33:33.560072 containerd[1462]: time="2026-04-13T20:33:33.558486487Z" level=info msg="StartContainer for \"cea4894cd6756067fe0091eb57c59b9d9d00b4d516ed5be4c009273868467cd3\" returns successfully" Apr 13 20:33:33.571935 containerd[1462]: time="2026-04-13T20:33:33.571783844Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6cf4cccc57-44gsp,Uid:2336f846-7101-45f4-8020-3cf5bfbed513,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"14fa379b3b3ff3a95d60d74e533469dea18cdd182437de52bd57ba81ee972fd5\"" Apr 13 20:33:33.577094 containerd[1462]: time="2026-04-13T20:33:33.577044545Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Apr 13 20:33:34.751484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302054451.mount: Deactivated successfully. Apr 13 20:33:34.960423 update_engine[1447]: I20260413 20:33:34.960244 1447 update_attempter.cc:509] Updating boot flags... 
Apr 13 20:33:35.125947 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2896) Apr 13 20:33:35.348826 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 31 scanned by (udev-worker) (2896) Apr 13 20:33:35.790604 kubelet[2582]: I0413 20:33:35.790301 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/kube-proxy-6vg6b" podStartSLOduration=3.789388603 podStartE2EDuration="3.789388603s" podCreationTimestamp="2026-04-13 20:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:33:34.170607375 +0000 UTC m=+6.392325310" watchObservedRunningTime="2026-04-13 20:33:35.789388603 +0000 UTC m=+8.011106557" Apr 13 20:33:36.931434 containerd[1462]: time="2026-04-13T20:33:36.931358081Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:36.933850 containerd[1462]: time="2026-04-13T20:33:36.933758003Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Apr 13 20:33:36.935776 containerd[1462]: time="2026-04-13T20:33:36.935730218Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:36.940059 containerd[1462]: time="2026-04-13T20:33:36.939957414Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:36.941430 containerd[1462]: time="2026-04-13T20:33:36.941359847Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag 
\"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.364254839s" Apr 13 20:33:36.941430 containerd[1462]: time="2026-04-13T20:33:36.941424531Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Apr 13 20:33:36.948226 containerd[1462]: time="2026-04-13T20:33:36.948178315Z" level=info msg="CreateContainer within sandbox \"14fa379b3b3ff3a95d60d74e533469dea18cdd182437de52bd57ba81ee972fd5\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Apr 13 20:33:36.970412 containerd[1462]: time="2026-04-13T20:33:36.970342252Z" level=info msg="CreateContainer within sandbox \"14fa379b3b3ff3a95d60d74e533469dea18cdd182437de52bd57ba81ee972fd5\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7c7999710085846fc693ea838f04d8f71bdda0e68a58169dcf2180f7692a9cf3\"" Apr 13 20:33:36.972004 containerd[1462]: time="2026-04-13T20:33:36.971959924Z" level=info msg="StartContainer for \"7c7999710085846fc693ea838f04d8f71bdda0e68a58169dcf2180f7692a9cf3\"" Apr 13 20:33:37.028258 systemd[1]: Started cri-containerd-7c7999710085846fc693ea838f04d8f71bdda0e68a58169dcf2180f7692a9cf3.scope - libcontainer container 7c7999710085846fc693ea838f04d8f71bdda0e68a58169dcf2180f7692a9cf3. 
Apr 13 20:33:37.071232 containerd[1462]: time="2026-04-13T20:33:37.071100998Z" level=info msg="StartContainer for \"7c7999710085846fc693ea838f04d8f71bdda0e68a58169dcf2180f7692a9cf3\" returns successfully"
Apr 13 20:33:37.195329 kubelet[2582]: I0413 20:33:37.194194 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6cf4cccc57-44gsp" podStartSLOduration=1.82716597 podStartE2EDuration="5.194004311s" podCreationTimestamp="2026-04-13 20:33:32 +0000 UTC" firstStartedPulling="2026-04-13 20:33:33.576135746 +0000 UTC m=+5.797853671" lastFinishedPulling="2026-04-13 20:33:36.942974103 +0000 UTC m=+9.164692012" observedRunningTime="2026-04-13 20:33:37.193973599 +0000 UTC m=+9.415691533" watchObservedRunningTime="2026-04-13 20:33:37.194004311 +0000 UTC m=+9.415722246"
Apr 13 20:33:44.812002 sudo[1710]: pam_unix(sudo:session): session closed for user root
Apr 13 20:33:44.931358 sshd[1707]: pam_unix(sshd:session): session closed for user core
Apr 13 20:33:44.940597 systemd[1]: sshd@6-10.128.0.70:22-20.229.252.112:47676.service: Deactivated successfully.
Apr 13 20:33:44.946498 systemd[1]: session-7.scope: Deactivated successfully.
Apr 13 20:33:44.947402 systemd[1]: session-7.scope: Consumed 5.352s CPU time, 160.0M memory peak, 0B memory swap peak.
Apr 13 20:33:44.951317 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Apr 13 20:33:44.953491 systemd-logind[1443]: Removed session 7.
Apr 13 20:33:48.667804 systemd[1]: Created slice kubepods-besteffort-pod4ca3201f_b21b_4bec_925b_8f5085933bae.slice - libcontainer container kubepods-besteffort-pod4ca3201f_b21b_4bec_925b_8f5085933bae.slice.
Apr 13 20:33:48.670542 kubelet[2582]: I0413 20:33:48.668843 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4ca3201f-b21b-4bec-925b-8f5085933bae-tigera-ca-bundle\") pod \"calico-typha-779db8bf99-m9flh\" (UID: \"4ca3201f-b21b-4bec-925b-8f5085933bae\") " pod="calico-system/calico-typha-779db8bf99-m9flh"
Apr 13 20:33:48.670542 kubelet[2582]: I0413 20:33:48.669087 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4ca3201f-b21b-4bec-925b-8f5085933bae-typha-certs\") pod \"calico-typha-779db8bf99-m9flh\" (UID: \"4ca3201f-b21b-4bec-925b-8f5085933bae\") " pod="calico-system/calico-typha-779db8bf99-m9flh"
Apr 13 20:33:48.670542 kubelet[2582]: I0413 20:33:48.669274 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xws2b\" (UniqueName: \"kubernetes.io/projected/4ca3201f-b21b-4bec-925b-8f5085933bae-kube-api-access-xws2b\") pod \"calico-typha-779db8bf99-m9flh\" (UID: \"4ca3201f-b21b-4bec-925b-8f5085933bae\") " pod="calico-system/calico-typha-779db8bf99-m9flh"
Apr 13 20:33:48.832557 systemd[1]: Created slice kubepods-besteffort-pod81aa8571_4cec_4d76_a967_d69584fa3506.slice - libcontainer container kubepods-besteffort-pod81aa8571_4cec_4d76_a967_d69584fa3506.slice.
Apr 13 20:33:48.870663 kubelet[2582]: I0413 20:33:48.870594 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-bpffs\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870663 kubelet[2582]: I0413 20:33:48.870663 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-cni-net-dir\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870958 kubelet[2582]: I0413 20:33:48.870697 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-var-lib-calico\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870958 kubelet[2582]: I0413 20:33:48.870721 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-nodeproc\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870958 kubelet[2582]: I0413 20:33:48.870750 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-var-run-calico\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870958 kubelet[2582]: I0413 20:33:48.870779 2582 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-xtables-lock\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.870958 kubelet[2582]: I0413 20:33:48.870810 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-flexvol-driver-host\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.871277 kubelet[2582]: I0413 20:33:48.870838 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-cni-bin-dir\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.871277 kubelet[2582]: I0413 20:33:48.870862 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/81aa8571-4cec-4d76-a967-d69584fa3506-node-certs\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.871805 kubelet[2582]: I0413 20:33:48.870890 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-sys-fs\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.871987 kubelet[2582]: I0413 20:33:48.871868 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/81aa8571-4cec-4d76-a967-d69584fa3506-tigera-ca-bundle\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.872157 kubelet[2582]: I0413 20:33:48.872100 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9w24\" (UniqueName: \"kubernetes.io/projected/81aa8571-4cec-4d76-a967-d69584fa3506-kube-api-access-p9w24\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.872300 kubelet[2582]: I0413 20:33:48.872268 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-cni-log-dir\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.872390 kubelet[2582]: I0413 20:33:48.872318 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-lib-modules\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.872390 kubelet[2582]: I0413 20:33:48.872350 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/81aa8571-4cec-4d76-a967-d69584fa3506-policysync\") pod \"calico-node-sv82j\" (UID: \"81aa8571-4cec-4d76-a967-d69584fa3506\") " pod="calico-system/calico-node-sv82j" Apr 13 20:33:48.906600 kubelet[2582]: E0413 20:33:48.906526 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:33:48.972870 kubelet[2582]: I0413 20:33:48.972698 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2-socket-dir\") pod \"csi-node-driver-dn72w\" (UID: \"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2\") " pod="calico-system/csi-node-driver-dn72w" Apr 13 20:33:48.972870 kubelet[2582]: I0413 20:33:48.972829 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2-kubelet-dir\") pod \"csi-node-driver-dn72w\" (UID: \"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2\") " pod="calico-system/csi-node-driver-dn72w" Apr 13 20:33:48.973150 kubelet[2582]: I0413 20:33:48.972884 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgdch\" (UniqueName: \"kubernetes.io/projected/3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2-kube-api-access-wgdch\") pod \"csi-node-driver-dn72w\" (UID: \"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2\") " pod="calico-system/csi-node-driver-dn72w" Apr 13 20:33:48.973150 kubelet[2582]: I0413 20:33:48.973032 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2-registration-dir\") pod \"csi-node-driver-dn72w\" (UID: \"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2\") " pod="calico-system/csi-node-driver-dn72w" Apr 13 20:33:48.973150 kubelet[2582]: I0413 20:33:48.973063 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2-varrun\") pod 
\"csi-node-driver-dn72w\" (UID: \"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2\") " pod="calico-system/csi-node-driver-dn72w" Apr 13 20:33:48.982486 containerd[1462]: time="2026-04-13T20:33:48.982411334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779db8bf99-m9flh,Uid:4ca3201f-b21b-4bec-925b-8f5085933bae,Namespace:calico-system,Attempt:0,}" Apr 13 20:33:48.986628 kubelet[2582]: E0413 20:33:48.986287 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.986628 kubelet[2582]: W0413 20:33:48.986318 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.986628 kubelet[2582]: E0413 20:33:48.986357 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:48.988530 kubelet[2582]: E0413 20:33:48.988284 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.988530 kubelet[2582]: W0413 20:33:48.988306 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.988530 kubelet[2582]: E0413 20:33:48.988331 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:48.990302 kubelet[2582]: E0413 20:33:48.989009 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.990302 kubelet[2582]: W0413 20:33:48.989038 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.990302 kubelet[2582]: E0413 20:33:48.989060 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:48.990302 kubelet[2582]: E0413 20:33:48.990187 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.990302 kubelet[2582]: W0413 20:33:48.990205 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.990302 kubelet[2582]: E0413 20:33:48.990228 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:48.991628 kubelet[2582]: E0413 20:33:48.991144 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.991628 kubelet[2582]: W0413 20:33:48.991166 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.991628 kubelet[2582]: E0413 20:33:48.991185 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:48.995789 kubelet[2582]: E0413 20:33:48.995710 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.996013 kubelet[2582]: W0413 20:33:48.995738 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:48.996127 kubelet[2582]: E0413 20:33:48.996033 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:48.998812 kubelet[2582]: E0413 20:33:48.998322 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:48.998812 kubelet[2582]: W0413 20:33:48.998791 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.001108 kubelet[2582]: E0413 20:33:48.998819 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.001108 kubelet[2582]: E0413 20:33:49.000338 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.001108 kubelet[2582]: W0413 20:33:49.000355 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.001108 kubelet[2582]: E0413 20:33:49.000376 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.002928 kubelet[2582]: E0413 20:33:49.001672 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.002928 kubelet[2582]: W0413 20:33:49.001691 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.002928 kubelet[2582]: E0413 20:33:49.001710 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.003187 kubelet[2582]: E0413 20:33:49.003017 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.003187 kubelet[2582]: W0413 20:33:49.003033 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.003187 kubelet[2582]: E0413 20:33:49.003052 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.005179 kubelet[2582]: E0413 20:33:49.003980 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.005179 kubelet[2582]: W0413 20:33:49.004001 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.005179 kubelet[2582]: E0413 20:33:49.004154 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.005179 kubelet[2582]: E0413 20:33:49.004961 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.005179 kubelet[2582]: W0413 20:33:49.004976 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.005179 kubelet[2582]: E0413 20:33:49.005008 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.009226 kubelet[2582]: E0413 20:33:49.005408 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.009226 kubelet[2582]: W0413 20:33:49.005423 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.009226 kubelet[2582]: E0413 20:33:49.005442 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.009226 kubelet[2582]: E0413 20:33:49.007310 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.009226 kubelet[2582]: W0413 20:33:49.007326 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.009226 kubelet[2582]: E0413 20:33:49.007345 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.063297 kubelet[2582]: E0413 20:33:49.059036 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.063297 kubelet[2582]: W0413 20:33:49.059064 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.063297 kubelet[2582]: E0413 20:33:49.059104 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.075022 kubelet[2582]: E0413 20:33:49.074981 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.075022 kubelet[2582]: W0413 20:33:49.075019 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.075296 kubelet[2582]: E0413 20:33:49.075051 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.075968 kubelet[2582]: E0413 20:33:49.075573 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.075968 kubelet[2582]: W0413 20:33:49.075603 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.075968 kubelet[2582]: E0413 20:33:49.075624 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.076615 kubelet[2582]: E0413 20:33:49.076133 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.076615 kubelet[2582]: W0413 20:33:49.076147 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.076615 kubelet[2582]: E0413 20:33:49.076165 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.076615 kubelet[2582]: E0413 20:33:49.076601 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.076615 kubelet[2582]: W0413 20:33:49.076617 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.078154 kubelet[2582]: E0413 20:33:49.076635 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.078154 kubelet[2582]: E0413 20:33:49.077275 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.078154 kubelet[2582]: W0413 20:33:49.077291 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.078154 kubelet[2582]: E0413 20:33:49.077320 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.079047 kubelet[2582]: E0413 20:33:49.078193 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.079047 kubelet[2582]: W0413 20:33:49.078209 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.079047 kubelet[2582]: E0413 20:33:49.078230 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.079591 kubelet[2582]: E0413 20:33:49.079369 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.079591 kubelet[2582]: W0413 20:33:49.079389 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.079591 kubelet[2582]: E0413 20:33:49.079429 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.080635 kubelet[2582]: E0413 20:33:49.080358 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.080635 kubelet[2582]: W0413 20:33:49.080380 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.080635 kubelet[2582]: E0413 20:33:49.080401 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.081197 kubelet[2582]: E0413 20:33:49.081179 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.081407 kubelet[2582]: W0413 20:33:49.081295 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.081407 kubelet[2582]: E0413 20:33:49.081323 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.082150 kubelet[2582]: E0413 20:33:49.081946 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.082150 kubelet[2582]: W0413 20:33:49.081965 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.082150 kubelet[2582]: E0413 20:33:49.081983 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.082675 containerd[1462]: time="2026-04-13T20:33:49.081662931Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:33:49.082675 containerd[1462]: time="2026-04-13T20:33:49.081768229Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:33:49.082675 containerd[1462]: time="2026-04-13T20:33:49.081795766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:49.082675 containerd[1462]: time="2026-04-13T20:33:49.082015660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:33:49.083236 kubelet[2582]: E0413 20:33:49.082960 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.083236 kubelet[2582]: W0413 20:33:49.082975 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.083236 kubelet[2582]: E0413 20:33:49.082992 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.084181 kubelet[2582]: E0413 20:33:49.083924 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.084181 kubelet[2582]: W0413 20:33:49.083945 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.084181 kubelet[2582]: E0413 20:33:49.083967 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.084927 kubelet[2582]: E0413 20:33:49.084641 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.084927 kubelet[2582]: W0413 20:33:49.084660 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.084927 kubelet[2582]: E0413 20:33:49.084681 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.085520 kubelet[2582]: E0413 20:33:49.085353 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.085520 kubelet[2582]: W0413 20:33:49.085374 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.085520 kubelet[2582]: E0413 20:33:49.085392 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.086376 kubelet[2582]: E0413 20:33:49.086049 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.086376 kubelet[2582]: W0413 20:33:49.086066 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.086376 kubelet[2582]: E0413 20:33:49.086084 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.086785 kubelet[2582]: E0413 20:33:49.086662 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.086785 kubelet[2582]: W0413 20:33:49.086679 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.086785 kubelet[2582]: E0413 20:33:49.086697 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.087529 kubelet[2582]: E0413 20:33:49.087398 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.087529 kubelet[2582]: W0413 20:33:49.087418 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.087529 kubelet[2582]: E0413 20:33:49.087436 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.088602 kubelet[2582]: E0413 20:33:49.088176 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.088602 kubelet[2582]: W0413 20:33:49.088195 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.088602 kubelet[2582]: E0413 20:33:49.088215 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.089157 kubelet[2582]: E0413 20:33:49.088965 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.089157 kubelet[2582]: W0413 20:33:49.088985 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.089157 kubelet[2582]: E0413 20:33:49.089004 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.090096 kubelet[2582]: E0413 20:33:49.089981 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.090096 kubelet[2582]: W0413 20:33:49.090002 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.090372 kubelet[2582]: E0413 20:33:49.090232 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.091176 kubelet[2582]: E0413 20:33:49.090939 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.091176 kubelet[2582]: W0413 20:33:49.090963 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.091176 kubelet[2582]: E0413 20:33:49.090984 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.092147 kubelet[2582]: E0413 20:33:49.091964 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.092147 kubelet[2582]: W0413 20:33:49.091999 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.092147 kubelet[2582]: E0413 20:33:49.092018 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:49.093057 kubelet[2582]: E0413 20:33:49.092863 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.093057 kubelet[2582]: W0413 20:33:49.092883 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.093057 kubelet[2582]: E0413 20:33:49.092941 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:49.094297 kubelet[2582]: E0413 20:33:49.094147 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:49.094297 kubelet[2582]: W0413 20:33:49.094167 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:49.094297 kubelet[2582]: E0413 20:33:49.094220 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Apr 13 20:33:49.095503 kubelet[2582]: E0413 20:33:49.095334 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:33:49.095503 kubelet[2582]: W0413 20:33:49.095403 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:33:49.095503 kubelet[2582]: E0413 20:33:49.095427 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:33:49.118522 kubelet[2582]: E0413 20:33:49.118488 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:33:49.121186 kubelet[2582]: W0413 20:33:49.120952 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:33:49.121186 kubelet[2582]: E0413 20:33:49.121004 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:33:49.125142 systemd[1]: Started cri-containerd-02197052eb2489751e46db37dcb5b7a1e8cfad1cf09aea8515005f653ea535a9.scope - libcontainer container 02197052eb2489751e46db37dcb5b7a1e8cfad1cf09aea8515005f653ea535a9.
Apr 13 20:33:49.144147 containerd[1462]: time="2026-04-13T20:33:49.143511897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sv82j,Uid:81aa8571-4cec-4d76-a967-d69584fa3506,Namespace:calico-system,Attempt:0,}"
Apr 13 20:33:49.201175 containerd[1462]: time="2026-04-13T20:33:49.199697549Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 13 20:33:49.201175 containerd[1462]: time="2026-04-13T20:33:49.199788135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 13 20:33:49.201175 containerd[1462]: time="2026-04-13T20:33:49.199828293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:33:49.201175 containerd[1462]: time="2026-04-13T20:33:49.199974740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 13 20:33:49.260239 systemd[1]: Started cri-containerd-e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52.scope - libcontainer container e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52.
Apr 13 20:33:49.268501 containerd[1462]: time="2026-04-13T20:33:49.268192686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779db8bf99-m9flh,Uid:4ca3201f-b21b-4bec-925b-8f5085933bae,Namespace:calico-system,Attempt:0,} returns sandbox id \"02197052eb2489751e46db37dcb5b7a1e8cfad1cf09aea8515005f653ea535a9\""
Apr 13 20:33:49.278887 containerd[1462]: time="2026-04-13T20:33:49.278793091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Apr 13 20:33:49.323982 containerd[1462]: time="2026-04-13T20:33:49.323768197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-sv82j,Uid:81aa8571-4cec-4d76-a967-d69584fa3506,Namespace:calico-system,Attempt:0,} returns sandbox id \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\""
Apr 13 20:33:50.461433 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1525765268.mount: Deactivated successfully.
Apr 13 20:33:51.052442 kubelet[2582]: E0413 20:33:51.052383 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2"
Apr 13 20:33:51.582273 containerd[1462]: time="2026-04-13T20:33:51.582201995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:51.583841 containerd[1462]: time="2026-04-13T20:33:51.583614897Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Apr 13 20:33:51.585443 containerd[1462]: time="2026-04-13T20:33:51.585337897Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:51.589956 containerd[1462]: time="2026-04-13T20:33:51.589113631Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 13 20:33:51.590809 containerd[1462]: time="2026-04-13T20:33:51.590401497Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.311555729s"
Apr 13 20:33:51.590809 containerd[1462]: time="2026-04-13T20:33:51.590449344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Apr 13 20:33:51.594947 containerd[1462]: time="2026-04-13T20:33:51.592501781Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Apr 13 20:33:51.624747 containerd[1462]: time="2026-04-13T20:33:51.624696668Z" level=info msg="CreateContainer within sandbox \"02197052eb2489751e46db37dcb5b7a1e8cfad1cf09aea8515005f653ea535a9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Apr 13 20:33:51.650937 containerd[1462]: time="2026-04-13T20:33:51.650515515Z" level=info msg="CreateContainer within sandbox \"02197052eb2489751e46db37dcb5b7a1e8cfad1cf09aea8515005f653ea535a9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3a976b172935179829a92c419f4d285566a468811b6c97000248c693e1957c1d\""
Apr 13 20:33:51.650590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248447237.mount: Deactivated successfully.
Apr 13 20:33:51.654944 containerd[1462]: time="2026-04-13T20:33:51.653820976Z" level=info msg="StartContainer for \"3a976b172935179829a92c419f4d285566a468811b6c97000248c693e1957c1d\""
Apr 13 20:33:51.715259 systemd[1]: Started cri-containerd-3a976b172935179829a92c419f4d285566a468811b6c97000248c693e1957c1d.scope - libcontainer container 3a976b172935179829a92c419f4d285566a468811b6c97000248c693e1957c1d.
Apr 13 20:33:51.787869 containerd[1462]: time="2026-04-13T20:33:51.787791213Z" level=info msg="StartContainer for \"3a976b172935179829a92c419f4d285566a468811b6c97000248c693e1957c1d\" returns successfully"
Apr 13 20:33:52.285055 kubelet[2582]: E0413 20:33:52.285010 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:33:52.285701 kubelet[2582]: W0413 20:33:52.285662 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:33:52.285864 kubelet[2582]: E0413 20:33:52.285842 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Apr 13 20:33:52.286780 kubelet[2582]: E0413 20:33:52.286757 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Apr 13 20:33:52.287731 kubelet[2582]: W0413 20:33:52.287679 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Apr 13 20:33:52.287731 kubelet[2582]: E0413 20:33:52.287724 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.288426 kubelet[2582]: E0413 20:33:52.288353 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.288426 kubelet[2582]: W0413 20:33:52.288369 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.288426 kubelet[2582]: E0413 20:33:52.288392 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.289179 kubelet[2582]: E0413 20:33:52.289095 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.289179 kubelet[2582]: W0413 20:33:52.289115 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.289179 kubelet[2582]: E0413 20:33:52.289139 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.289566 kubelet[2582]: E0413 20:33:52.289517 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.289566 kubelet[2582]: W0413 20:33:52.289532 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.289566 kubelet[2582]: E0413 20:33:52.289550 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.290445 kubelet[2582]: E0413 20:33:52.289880 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.290445 kubelet[2582]: W0413 20:33:52.289930 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.290445 kubelet[2582]: E0413 20:33:52.289959 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.290445 kubelet[2582]: E0413 20:33:52.290353 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.290445 kubelet[2582]: W0413 20:33:52.290371 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.290445 kubelet[2582]: E0413 20:33:52.290389 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.290920 kubelet[2582]: E0413 20:33:52.290712 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.290920 kubelet[2582]: W0413 20:33:52.290727 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.290920 kubelet[2582]: E0413 20:33:52.290743 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.291151 kubelet[2582]: E0413 20:33:52.291131 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.291151 kubelet[2582]: W0413 20:33:52.291145 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.291316 kubelet[2582]: E0413 20:33:52.291166 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.292296 kubelet[2582]: E0413 20:33:52.291503 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.292296 kubelet[2582]: W0413 20:33:52.291522 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.292296 kubelet[2582]: E0413 20:33:52.291540 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.292296 kubelet[2582]: E0413 20:33:52.291931 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.292296 kubelet[2582]: W0413 20:33:52.291952 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.292296 kubelet[2582]: E0413 20:33:52.291979 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.292770 kubelet[2582]: E0413 20:33:52.292406 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.292770 kubelet[2582]: W0413 20:33:52.292421 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.292770 kubelet[2582]: E0413 20:33:52.292582 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.293062 kubelet[2582]: E0413 20:33:52.293036 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.293062 kubelet[2582]: W0413 20:33:52.293057 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.293232 kubelet[2582]: E0413 20:33:52.293076 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.293420 kubelet[2582]: E0413 20:33:52.293401 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.293492 kubelet[2582]: W0413 20:33:52.293420 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.293492 kubelet[2582]: E0413 20:33:52.293436 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.293776 kubelet[2582]: E0413 20:33:52.293756 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.293776 kubelet[2582]: W0413 20:33:52.293774 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.293945 kubelet[2582]: E0413 20:33:52.293793 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.304651 kubelet[2582]: E0413 20:33:52.304615 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.304651 kubelet[2582]: W0413 20:33:52.304645 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.305281 kubelet[2582]: E0413 20:33:52.304673 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.305281 kubelet[2582]: E0413 20:33:52.305124 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.305281 kubelet[2582]: W0413 20:33:52.305140 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.305281 kubelet[2582]: E0413 20:33:52.305158 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.305624 kubelet[2582]: E0413 20:33:52.305594 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.305624 kubelet[2582]: W0413 20:33:52.305610 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.305782 kubelet[2582]: E0413 20:33:52.305628 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.306046 kubelet[2582]: E0413 20:33:52.306000 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.306046 kubelet[2582]: W0413 20:33:52.306020 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.306046 kubelet[2582]: E0413 20:33:52.306038 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.306484 kubelet[2582]: E0413 20:33:52.306461 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.306484 kubelet[2582]: W0413 20:33:52.306482 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.306681 kubelet[2582]: E0413 20:33:52.306501 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.306989 kubelet[2582]: E0413 20:33:52.306954 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.306989 kubelet[2582]: W0413 20:33:52.306969 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.306989 kubelet[2582]: E0413 20:33:52.306986 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.307567 kubelet[2582]: E0413 20:33:52.307542 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.307567 kubelet[2582]: W0413 20:33:52.307563 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.307764 kubelet[2582]: E0413 20:33:52.307582 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.308087 kubelet[2582]: E0413 20:33:52.308065 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.308184 kubelet[2582]: W0413 20:33:52.308087 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.308184 kubelet[2582]: E0413 20:33:52.308107 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.308580 kubelet[2582]: E0413 20:33:52.308556 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.308580 kubelet[2582]: W0413 20:33:52.308578 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.308758 kubelet[2582]: E0413 20:33:52.308597 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.309079 kubelet[2582]: E0413 20:33:52.309056 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.309237 kubelet[2582]: W0413 20:33:52.309077 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.309237 kubelet[2582]: E0413 20:33:52.309102 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.309583 kubelet[2582]: E0413 20:33:52.309489 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.309583 kubelet[2582]: W0413 20:33:52.309505 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.309583 kubelet[2582]: E0413 20:33:52.309521 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.310054 kubelet[2582]: E0413 20:33:52.310033 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.310163 kubelet[2582]: W0413 20:33:52.310078 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.310163 kubelet[2582]: E0413 20:33:52.310102 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.310568 kubelet[2582]: E0413 20:33:52.310547 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.310568 kubelet[2582]: W0413 20:33:52.310567 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.310715 kubelet[2582]: E0413 20:33:52.310585 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.311587 kubelet[2582]: E0413 20:33:52.311549 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.311683 kubelet[2582]: W0413 20:33:52.311568 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.311683 kubelet[2582]: E0413 20:33:52.311618 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.312343 kubelet[2582]: E0413 20:33:52.312316 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.312343 kubelet[2582]: W0413 20:33:52.312342 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.312496 kubelet[2582]: E0413 20:33:52.312364 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.313097 kubelet[2582]: E0413 20:33:52.313073 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.313097 kubelet[2582]: W0413 20:33:52.313096 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.313351 kubelet[2582]: E0413 20:33:52.313117 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.314100 kubelet[2582]: E0413 20:33:52.314076 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.314100 kubelet[2582]: W0413 20:33:52.314098 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.314296 kubelet[2582]: E0413 20:33:52.314117 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 13 20:33:52.314728 kubelet[2582]: E0413 20:33:52.314706 2582 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 13 20:33:52.314728 kubelet[2582]: W0413 20:33:52.314726 2582 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 13 20:33:52.314872 kubelet[2582]: E0413 20:33:52.314748 2582 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 13 20:33:52.704105 containerd[1462]: time="2026-04-13T20:33:52.703125006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Apr 13 20:33:52.707627 containerd[1462]: time="2026-04-13T20:33:52.707549184Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:52.717377 containerd[1462]: time="2026-04-13T20:33:52.717226239Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:52.718528 containerd[1462]: time="2026-04-13T20:33:52.718328547Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.125778031s" Apr 13 20:33:52.718528 containerd[1462]: time="2026-04-13T20:33:52.718389786Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Apr 13 20:33:52.719930 containerd[1462]: time="2026-04-13T20:33:52.719590246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:33:52.729851 containerd[1462]: time="2026-04-13T20:33:52.729664505Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 13 20:33:52.755105 containerd[1462]: time="2026-04-13T20:33:52.755029267Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443\"" Apr 13 20:33:52.757854 containerd[1462]: time="2026-04-13T20:33:52.755825858Z" level=info msg="StartContainer for \"436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443\"" Apr 13 20:33:52.829144 systemd[1]: Started cri-containerd-436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443.scope - libcontainer container 436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443. 
Apr 13 20:33:53.028188 containerd[1462]: time="2026-04-13T20:33:53.028112148Z" level=info msg="StartContainer for \"436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443\" returns successfully" Apr 13 20:33:53.052710 kubelet[2582]: E0413 20:33:53.052549 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:33:53.081824 systemd[1]: cri-containerd-436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443.scope: Deactivated successfully. Apr 13 20:33:53.279985 kubelet[2582]: I0413 20:33:53.279756 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:33:53.306213 kubelet[2582]: I0413 20:33:53.306129 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-typha-779db8bf99-m9flh" podStartSLOduration=2.990310063 podStartE2EDuration="5.306096256s" podCreationTimestamp="2026-04-13 20:33:48 +0000 UTC" firstStartedPulling="2026-04-13 20:33:49.276342832 +0000 UTC m=+21.498060750" lastFinishedPulling="2026-04-13 20:33:51.592129021 +0000 UTC m=+23.813846943" observedRunningTime="2026-04-13 20:33:52.287051371 +0000 UTC m=+24.508769305" watchObservedRunningTime="2026-04-13 20:33:53.306096256 +0000 UTC m=+25.527814196" Apr 13 20:33:53.604229 systemd[1]: run-containerd-runc-k8s.io-436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443-runc.uaTo5d.mount: Deactivated successfully. Apr 13 20:33:53.604432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443-rootfs.mount: Deactivated successfully. 
Apr 13 20:33:54.028210 containerd[1462]: time="2026-04-13T20:33:54.028075234Z" level=info msg="shim disconnected" id=436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443 namespace=k8s.io Apr 13 20:33:54.028210 containerd[1462]: time="2026-04-13T20:33:54.028192403Z" level=warning msg="cleaning up after shim disconnected" id=436c2cb2c8432962f3a1b4d4195328fafceeb6ce0f5547c30434f1a414e88443 namespace=k8s.io Apr 13 20:33:54.028210 containerd[1462]: time="2026-04-13T20:33:54.028209175Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:33:54.288426 containerd[1462]: time="2026-04-13T20:33:54.287835311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Apr 13 20:33:55.052458 kubelet[2582]: E0413 20:33:55.052367 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:33:57.052778 kubelet[2582]: E0413 20:33:57.052685 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:33:59.052712 kubelet[2582]: E0413 20:33:59.052532 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:01.053143 kubelet[2582]: E0413 20:34:01.053080 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:03.053184 kubelet[2582]: E0413 20:34:03.053099 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:03.526516 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2437386116.mount: Deactivated successfully. Apr 13 20:34:03.565389 containerd[1462]: time="2026-04-13T20:34:03.565307284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:03.566935 containerd[1462]: time="2026-04-13T20:34:03.566725549Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Apr 13 20:34:03.568610 containerd[1462]: time="2026-04-13T20:34:03.568526872Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:03.572147 containerd[1462]: time="2026-04-13T20:34:03.572059196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:03.573291 containerd[1462]: time="2026-04-13T20:34:03.573240834Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 9.285345988s" Apr 13 20:34:03.574186 containerd[1462]: time="2026-04-13T20:34:03.573296656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Apr 13 20:34:03.582112 containerd[1462]: time="2026-04-13T20:34:03.582016063Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Apr 13 20:34:03.609949 containerd[1462]: time="2026-04-13T20:34:03.608271485Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b\"" Apr 13 20:34:03.610692 containerd[1462]: time="2026-04-13T20:34:03.610538376Z" level=info msg="StartContainer for \"15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b\"" Apr 13 20:34:03.667720 systemd[1]: run-containerd-runc-k8s.io-15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b-runc.shlnFR.mount: Deactivated successfully. Apr 13 20:34:03.679315 systemd[1]: Started cri-containerd-15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b.scope - libcontainer container 15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b. Apr 13 20:34:03.738939 containerd[1462]: time="2026-04-13T20:34:03.737491461Z" level=info msg="StartContainer for \"15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b\" returns successfully" Apr 13 20:34:03.809471 systemd[1]: cri-containerd-15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b.scope: Deactivated successfully. 
Apr 13 20:34:04.524038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b-rootfs.mount: Deactivated successfully. Apr 13 20:34:05.052512 kubelet[2582]: E0413 20:34:05.052420 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:05.440558 containerd[1462]: time="2026-04-13T20:34:05.440308191Z" level=info msg="shim disconnected" id=15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b namespace=k8s.io Apr 13 20:34:05.440558 containerd[1462]: time="2026-04-13T20:34:05.440431455Z" level=warning msg="cleaning up after shim disconnected" id=15fb061b999fd919c6ab6d77166e7650afe206cd0753a8f97dcf1db1bd39628b namespace=k8s.io Apr 13 20:34:05.440558 containerd[1462]: time="2026-04-13T20:34:05.440449138Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:34:06.329581 containerd[1462]: time="2026-04-13T20:34:06.329155575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Apr 13 20:34:07.052607 kubelet[2582]: E0413 20:34:07.052491 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:09.052657 kubelet[2582]: E0413 20:34:09.052500 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" 
podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:10.094213 containerd[1462]: time="2026-04-13T20:34:10.094132833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:10.096283 containerd[1462]: time="2026-04-13T20:34:10.096193681Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Apr 13 20:34:10.097892 containerd[1462]: time="2026-04-13T20:34:10.097805230Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:10.107716 containerd[1462]: time="2026-04-13T20:34:10.107659521Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:10.111179 containerd[1462]: time="2026-04-13T20:34:10.111117730Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.781904065s" Apr 13 20:34:10.111179 containerd[1462]: time="2026-04-13T20:34:10.111171297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Apr 13 20:34:10.120270 containerd[1462]: time="2026-04-13T20:34:10.120218712Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 13 20:34:10.142559 containerd[1462]: 
time="2026-04-13T20:34:10.142481263Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a\"" Apr 13 20:34:10.145027 containerd[1462]: time="2026-04-13T20:34:10.143351679Z" level=info msg="StartContainer for \"902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a\"" Apr 13 20:34:10.219123 systemd[1]: Started cri-containerd-902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a.scope - libcontainer container 902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a. Apr 13 20:34:10.275451 containerd[1462]: time="2026-04-13T20:34:10.275262169Z" level=info msg="StartContainer for \"902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a\" returns successfully" Apr 13 20:34:11.053132 kubelet[2582]: E0413 20:34:11.052563 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:11.395193 systemd[1]: cri-containerd-902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a.scope: Deactivated successfully. Apr 13 20:34:11.438432 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a-rootfs.mount: Deactivated successfully. Apr 13 20:34:11.485999 kubelet[2582]: I0413 20:34:11.485938 2582 kubelet_node_status.go:427] "Fast updating node status as it just became ready" Apr 13 20:34:11.708022 systemd[1]: Created slice kubepods-burstable-pod4f231b08_404f_4650_8082_80470e832cfe.slice - libcontainer container kubepods-burstable-pod4f231b08_404f_4650_8082_80470e832cfe.slice. 
Apr 13 20:34:11.761224 kubelet[2582]: I0413 20:34:11.761152 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9r58\" (UniqueName: \"kubernetes.io/projected/4f231b08-404f-4650-8082-80470e832cfe-kube-api-access-q9r58\") pod \"coredns-7d764666f9-kh7qp\" (UID: \"4f231b08-404f-4650-8082-80470e832cfe\") " pod="kube-system/coredns-7d764666f9-kh7qp" Apr 13 20:34:11.761532 kubelet[2582]: I0413 20:34:11.761235 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f231b08-404f-4650-8082-80470e832cfe-config-volume\") pod \"coredns-7d764666f9-kh7qp\" (UID: \"4f231b08-404f-4650-8082-80470e832cfe\") " pod="kube-system/coredns-7d764666f9-kh7qp" Apr 13 20:34:11.927859 systemd[1]: Created slice kubepods-burstable-pod24399e91_dbab_4831_aa98_8db96cfff9e4.slice - libcontainer container kubepods-burstable-pod24399e91_dbab_4831_aa98_8db96cfff9e4.slice. 
Apr 13 20:34:11.992537 kubelet[2582]: I0413 20:34:11.962642 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/24399e91-dbab-4831-aa98-8db96cfff9e4-config-volume\") pod \"coredns-7d764666f9-f8m2g\" (UID: \"24399e91-dbab-4831-aa98-8db96cfff9e4\") " pod="kube-system/coredns-7d764666f9-f8m2g" Apr 13 20:34:11.992537 kubelet[2582]: I0413 20:34:11.962773 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdzwc\" (UniqueName: \"kubernetes.io/projected/24399e91-dbab-4831-aa98-8db96cfff9e4-kube-api-access-qdzwc\") pod \"coredns-7d764666f9-f8m2g\" (UID: \"24399e91-dbab-4831-aa98-8db96cfff9e4\") " pod="kube-system/coredns-7d764666f9-f8m2g" Apr 13 20:34:12.000168 containerd[1462]: time="2026-04-13T20:34:11.998320464Z" level=info msg="shim disconnected" id=902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a namespace=k8s.io Apr 13 20:34:12.000168 containerd[1462]: time="2026-04-13T20:34:11.998392854Z" level=warning msg="cleaning up after shim disconnected" id=902e0374c5460fcefa36575b10ac03401c05efa22fea8aab89c5e4634e5acd1a namespace=k8s.io Apr 13 20:34:12.000168 containerd[1462]: time="2026-04-13T20:34:11.998406681Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 13 20:34:12.026324 systemd[1]: Created slice kubepods-besteffort-podac435b27_3a39_4ef6_8b1e_437562c1e7eb.slice - libcontainer container kubepods-besteffort-podac435b27_3a39_4ef6_8b1e_437562c1e7eb.slice. 
Apr 13 20:34:12.052258 containerd[1462]: time="2026-04-13T20:34:12.052044372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kh7qp,Uid:4f231b08-404f-4650-8082-80470e832cfe,Namespace:kube-system,Attempt:0,}" Apr 13 20:34:12.071961 kubelet[2582]: I0413 20:34:12.070587 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29fcd04c-8e33-4e32-b58c-36d11bc97ed6-goldmane-ca-bundle\") pod \"goldmane-9f7667bb8-9xj9t\" (UID: \"29fcd04c-8e33-4e32-b58c-36d11bc97ed6\") " pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.071961 kubelet[2582]: I0413 20:34:12.070680 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c693e029-794b-434b-97e0-e01594b71108-calico-apiserver-certs\") pod \"calico-apiserver-7d4d888f55-tqdxr\" (UID: \"c693e029-794b-434b-97e0-e01594b71108\") " pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" Apr 13 20:34:12.086501 kubelet[2582]: I0413 20:34:12.081377 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2lvb\" (UniqueName: \"kubernetes.io/projected/c693e029-794b-434b-97e0-e01594b71108-kube-api-access-s2lvb\") pod \"calico-apiserver-7d4d888f55-tqdxr\" (UID: \"c693e029-794b-434b-97e0-e01594b71108\") " pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" Apr 13 20:34:12.086501 kubelet[2582]: I0413 20:34:12.081481 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ac435b27-3a39-4ef6-8b1e-437562c1e7eb-calico-apiserver-certs\") pod \"calico-apiserver-7d4d888f55-gszcq\" (UID: \"ac435b27-3a39-4ef6-8b1e-437562c1e7eb\") " pod="calico-system/calico-apiserver-7d4d888f55-gszcq" Apr 13 20:34:12.086501 kubelet[2582]: I0413 
20:34:12.081529 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l2gnh\" (UniqueName: \"kubernetes.io/projected/72d06a65-1282-431f-bff3-3de35ce0d86c-kube-api-access-l2gnh\") pod \"calico-kube-controllers-7ff99f9c59-94r4d\" (UID: \"72d06a65-1282-431f-bff3-3de35ce0d86c\") " pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" Apr 13 20:34:12.086501 kubelet[2582]: I0413 20:34:12.081583 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/29fcd04c-8e33-4e32-b58c-36d11bc97ed6-goldmane-key-pair\") pod \"goldmane-9f7667bb8-9xj9t\" (UID: \"29fcd04c-8e33-4e32-b58c-36d11bc97ed6\") " pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.086501 kubelet[2582]: I0413 20:34:12.081729 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-nginx-config\") pod \"whisker-7b55c6d7cc-6f99v\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:12.090163 kubelet[2582]: I0413 20:34:12.081780 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sp9wm\" (UniqueName: \"kubernetes.io/projected/ac435b27-3a39-4ef6-8b1e-437562c1e7eb-kube-api-access-sp9wm\") pod \"calico-apiserver-7d4d888f55-gszcq\" (UID: \"ac435b27-3a39-4ef6-8b1e-437562c1e7eb\") " pod="calico-system/calico-apiserver-7d4d888f55-gszcq" Apr 13 20:34:12.090163 kubelet[2582]: I0413 20:34:12.081822 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29fcd04c-8e33-4e32-b58c-36d11bc97ed6-config\") pod \"goldmane-9f7667bb8-9xj9t\" (UID: \"29fcd04c-8e33-4e32-b58c-36d11bc97ed6\") " 
pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.090163 kubelet[2582]: I0413 20:34:12.081857 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pb79f\" (UniqueName: \"kubernetes.io/projected/29fcd04c-8e33-4e32-b58c-36d11bc97ed6-kube-api-access-pb79f\") pod \"goldmane-9f7667bb8-9xj9t\" (UID: \"29fcd04c-8e33-4e32-b58c-36d11bc97ed6\") " pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.100243 kubelet[2582]: I0413 20:34:12.096031 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/72d06a65-1282-431f-bff3-3de35ce0d86c-tigera-ca-bundle\") pod \"calico-kube-controllers-7ff99f9c59-94r4d\" (UID: \"72d06a65-1282-431f-bff3-3de35ce0d86c\") " pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" Apr 13 20:34:12.100243 kubelet[2582]: I0413 20:34:12.096146 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-backend-key-pair\") pod \"whisker-7b55c6d7cc-6f99v\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:12.100243 kubelet[2582]: I0413 20:34:12.096226 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clffx\" (UniqueName: \"kubernetes.io/projected/5d738dcc-413d-4de4-9c00-e5a6fce1be78-kube-api-access-clffx\") pod \"whisker-7b55c6d7cc-6f99v\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:12.100243 kubelet[2582]: I0413 20:34:12.099816 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-ca-bundle\") pod \"whisker-7b55c6d7cc-6f99v\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:12.106027 systemd[1]: Created slice kubepods-besteffort-pod72d06a65_1282_431f_bff3_3de35ce0d86c.slice - libcontainer container kubepods-besteffort-pod72d06a65_1282_431f_bff3_3de35ce0d86c.slice. Apr 13 20:34:12.143613 systemd[1]: Created slice kubepods-besteffort-podc693e029_794b_434b_97e0_e01594b71108.slice - libcontainer container kubepods-besteffort-podc693e029_794b_434b_97e0_e01594b71108.slice. Apr 13 20:34:12.203758 systemd[1]: Created slice kubepods-besteffort-pod5d738dcc_413d_4de4_9c00_e5a6fce1be78.slice - libcontainer container kubepods-besteffort-pod5d738dcc_413d_4de4_9c00_e5a6fce1be78.slice. Apr 13 20:34:12.292008 containerd[1462]: time="2026-04-13T20:34:12.289539423Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f8m2g,Uid:24399e91-dbab-4831-aa98-8db96cfff9e4,Namespace:kube-system,Attempt:0,}" Apr 13 20:34:12.319508 systemd[1]: Created slice kubepods-besteffort-pod29fcd04c_8e33_4e32_b58c_36d11bc97ed6.slice - libcontainer container kubepods-besteffort-pod29fcd04c_8e33_4e32_b58c_36d11bc97ed6.slice. 
Apr 13 20:34:12.321830 containerd[1462]: time="2026-04-13T20:34:12.321759934Z" level=error msg="Failed to destroy network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.322750 containerd[1462]: time="2026-04-13T20:34:12.322668412Z" level=error msg="encountered an error cleaning up failed sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.322873 containerd[1462]: time="2026-04-13T20:34:12.322812925Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kh7qp,Uid:4f231b08-404f-4650-8082-80470e832cfe,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.324959 kubelet[2582]: E0413 20:34:12.324880 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.327691 kubelet[2582]: E0413 20:34:12.326121 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-kh7qp" Apr 13 20:34:12.327691 kubelet[2582]: E0413 20:34:12.326171 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-kh7qp" Apr 13 20:34:12.327691 kubelet[2582]: E0413 20:34:12.326262 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-kh7qp_kube-system(4f231b08-404f-4650-8082-80470e832cfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-kh7qp_kube-system(4f231b08-404f-4650-8082-80470e832cfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-kh7qp" podUID="4f231b08-404f-4650-8082-80470e832cfe" Apr 13 20:34:12.332879 containerd[1462]: time="2026-04-13T20:34:12.332825903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9xj9t,Uid:29fcd04c-8e33-4e32-b58c-36d11bc97ed6,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:12.386269 kubelet[2582]: I0413 20:34:12.386222 2582 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:12.391148 containerd[1462]: time="2026-04-13T20:34:12.391096575Z" level=info msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" Apr 13 20:34:12.391404 containerd[1462]: time="2026-04-13T20:34:12.391370912Z" level=info msg="Ensure that sandbox c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89 in task-service has been cleanup successfully" Apr 13 20:34:12.408742 containerd[1462]: time="2026-04-13T20:34:12.408602316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-gszcq,Uid:ac435b27-3a39-4ef6-8b1e-437562c1e7eb,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:12.416235 containerd[1462]: time="2026-04-13T20:34:12.414312845Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 13 20:34:12.441123 containerd[1462]: time="2026-04-13T20:34:12.440980137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff99f9c59-94r4d,Uid:72d06a65-1282-431f-bff3-3de35ce0d86c,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:12.481453 containerd[1462]: time="2026-04-13T20:34:12.480994835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-tqdxr,Uid:c693e029-794b-434b-97e0-e01594b71108,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:12.483799 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89-shm.mount: Deactivated successfully. Apr 13 20:34:12.533370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1033165638.mount: Deactivated successfully. 
Apr 13 20:34:12.540162 containerd[1462]: time="2026-04-13T20:34:12.540105387Z" level=info msg="CreateContainer within sandbox \"e8c8e27cb702d81bf28db09521a9a5f7c5c4465092957be746a5902554936a52\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95\"" Apr 13 20:34:12.546956 containerd[1462]: time="2026-04-13T20:34:12.546358666Z" level=info msg="StartContainer for \"a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95\"" Apr 13 20:34:12.593735 containerd[1462]: time="2026-04-13T20:34:12.593671139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b55c6d7cc-6f99v,Uid:5d738dcc-413d-4de4-9c00-e5a6fce1be78,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:12.620402 containerd[1462]: time="2026-04-13T20:34:12.620338111Z" level=error msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" failed" error="failed to destroy network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.621196 kubelet[2582]: E0413 20:34:12.620991 2582 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:12.621448 kubelet[2582]: E0413 20:34:12.621129 2582 kuberuntime_manager.go:1881] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89"} Apr 13 20:34:12.621656 kubelet[2582]: E0413 20:34:12.621580 2582 kuberuntime_manager.go:1422] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4f231b08-404f-4650-8082-80470e832cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Apr 13 20:34:12.622021 kubelet[2582]: E0413 20:34:12.621864 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4f231b08-404f-4650-8082-80470e832cfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-kh7qp" podUID="4f231b08-404f-4650-8082-80470e832cfe" Apr 13 20:34:12.729450 containerd[1462]: time="2026-04-13T20:34:12.726627436Z" level=error msg="Failed to destroy network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.733399 containerd[1462]: time="2026-04-13T20:34:12.730889705Z" level=error msg="encountered an error cleaning up failed sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.733399 containerd[1462]: time="2026-04-13T20:34:12.732979961Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f8m2g,Uid:24399e91-dbab-4831-aa98-8db96cfff9e4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.734792 kubelet[2582]: E0413 20:34:12.734734 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.735486 kubelet[2582]: E0413 20:34:12.735323 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-f8m2g" Apr 13 20:34:12.736106 kubelet[2582]: E0413 20:34:12.735866 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7d764666f9-f8m2g" Apr 13 20:34:12.736486 systemd[1]: Started cri-containerd-a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95.scope - libcontainer container a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95. Apr 13 20:34:12.739336 kubelet[2582]: E0413 20:34:12.738957 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7d764666f9-f8m2g_kube-system(24399e91-dbab-4831-aa98-8db96cfff9e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7d764666f9-f8m2g_kube-system(24399e91-dbab-4831-aa98-8db96cfff9e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7d764666f9-f8m2g" podUID="24399e91-dbab-4831-aa98-8db96cfff9e4" Apr 13 20:34:12.865642 containerd[1462]: time="2026-04-13T20:34:12.864106570Z" level=error msg="Failed to destroy network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.884936 containerd[1462]: time="2026-04-13T20:34:12.884467008Z" level=error msg="encountered an error cleaning up failed sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.884936 
containerd[1462]: time="2026-04-13T20:34:12.884592157Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9xj9t,Uid:29fcd04c-8e33-4e32-b58c-36d11bc97ed6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.885193 kubelet[2582]: E0413 20:34:12.884944 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.885193 kubelet[2582]: E0413 20:34:12.885021 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.885193 kubelet[2582]: E0413 20:34:12.885052 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-9f7667bb8-9xj9t" Apr 13 20:34:12.885428 kubelet[2582]: E0413 
20:34:12.885128 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-9f7667bb8-9xj9t_calico-system(29fcd04c-8e33-4e32-b58c-36d11bc97ed6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-9f7667bb8-9xj9t_calico-system(29fcd04c-8e33-4e32-b58c-36d11bc97ed6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-9f7667bb8-9xj9t" podUID="29fcd04c-8e33-4e32-b58c-36d11bc97ed6" Apr 13 20:34:12.931673 containerd[1462]: time="2026-04-13T20:34:12.930329487Z" level=info msg="StartContainer for \"a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95\" returns successfully" Apr 13 20:34:12.956807 containerd[1462]: time="2026-04-13T20:34:12.956738556Z" level=error msg="Failed to destroy network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.960355 containerd[1462]: time="2026-04-13T20:34:12.960097190Z" level=error msg="encountered an error cleaning up failed sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.960355 containerd[1462]: time="2026-04-13T20:34:12.960199262Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-gszcq,Uid:ac435b27-3a39-4ef6-8b1e-437562c1e7eb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.961600 kubelet[2582]: E0413 20:34:12.960872 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.961600 kubelet[2582]: E0413 20:34:12.961269 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4d888f55-gszcq" Apr 13 20:34:12.961600 kubelet[2582]: E0413 20:34:12.961528 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4d888f55-gszcq" Apr 13 20:34:12.963715 kubelet[2582]: E0413 20:34:12.963522 2582 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4d888f55-gszcq_calico-system(ac435b27-3a39-4ef6-8b1e-437562c1e7eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4d888f55-gszcq_calico-system(ac435b27-3a39-4ef6-8b1e-437562c1e7eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4d888f55-gszcq" podUID="ac435b27-3a39-4ef6-8b1e-437562c1e7eb" Apr 13 20:34:12.991405 containerd[1462]: time="2026-04-13T20:34:12.991227093Z" level=error msg="Failed to destroy network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.994915 containerd[1462]: time="2026-04-13T20:34:12.993090126Z" level=error msg="encountered an error cleaning up failed sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.994915 containerd[1462]: time="2026-04-13T20:34:12.993191500Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff99f9c59-94r4d,Uid:72d06a65-1282-431f-bff3-3de35ce0d86c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.995222 kubelet[2582]: E0413 20:34:12.993509 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:12.995222 kubelet[2582]: E0413 20:34:12.993580 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" Apr 13 20:34:12.995222 kubelet[2582]: E0413 20:34:12.993620 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" Apr 13 20:34:12.995426 kubelet[2582]: E0413 20:34:12.993700 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7ff99f9c59-94r4d_calico-system(72d06a65-1282-431f-bff3-3de35ce0d86c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7ff99f9c59-94r4d_calico-system(72d06a65-1282-431f-bff3-3de35ce0d86c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" podUID="72d06a65-1282-431f-bff3-3de35ce0d86c" Apr 13 20:34:13.023407 containerd[1462]: time="2026-04-13T20:34:13.022699774Z" level=error msg="Failed to destroy network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.023407 containerd[1462]: time="2026-04-13T20:34:13.023232450Z" level=error msg="encountered an error cleaning up failed sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.023407 containerd[1462]: time="2026-04-13T20:34:13.023320222Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b55c6d7cc-6f99v,Uid:5d738dcc-413d-4de4-9c00-e5a6fce1be78,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.024970 kubelet[2582]: E0413 20:34:13.024522 2582 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.024970 kubelet[2582]: E0413 20:34:13.024599 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:13.024970 kubelet[2582]: E0413 20:34:13.024629 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7b55c6d7cc-6f99v" Apr 13 20:34:13.025243 kubelet[2582]: E0413 20:34:13.024705 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7b55c6d7cc-6f99v_calico-system(5d738dcc-413d-4de4-9c00-e5a6fce1be78)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7b55c6d7cc-6f99v_calico-system(5d738dcc-413d-4de4-9c00-e5a6fce1be78)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7b55c6d7cc-6f99v" podUID="5d738dcc-413d-4de4-9c00-e5a6fce1be78" Apr 13 20:34:13.032872 containerd[1462]: time="2026-04-13T20:34:13.032087762Z" level=error msg="Failed to destroy network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.032872 containerd[1462]: time="2026-04-13T20:34:13.032611536Z" level=error msg="encountered an error cleaning up failed sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.032872 containerd[1462]: time="2026-04-13T20:34:13.032737423Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-tqdxr,Uid:c693e029-794b-434b-97e0-e01594b71108,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.034701 kubelet[2582]: E0413 20:34:13.033456 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 
20:34:13.034701 kubelet[2582]: E0413 20:34:13.033528 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" Apr 13 20:34:13.034701 kubelet[2582]: E0413 20:34:13.033560 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" Apr 13 20:34:13.035104 kubelet[2582]: E0413 20:34:13.033641 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7d4d888f55-tqdxr_calico-system(c693e029-794b-434b-97e0-e01594b71108)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7d4d888f55-tqdxr_calico-system(c693e029-794b-434b-97e0-e01594b71108)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" podUID="c693e029-794b-434b-97e0-e01594b71108" Apr 13 20:34:13.065651 systemd[1]: Created slice kubepods-besteffort-pod3e73a65a_fec1_4a28_8ba2_fc4af2b02bb2.slice - libcontainer container 
kubepods-besteffort-pod3e73a65a_fec1_4a28_8ba2_fc4af2b02bb2.slice. Apr 13 20:34:13.075262 containerd[1462]: time="2026-04-13T20:34:13.075203954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dn72w,Uid:3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:13.392480 kubelet[2582]: I0413 20:34:13.392405 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:13.394302 containerd[1462]: time="2026-04-13T20:34:13.393545375Z" level=info msg="StopPodSandbox for \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\"" Apr 13 20:34:13.395492 containerd[1462]: time="2026-04-13T20:34:13.395448144Z" level=info msg="Ensure that sandbox 66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b in task-service has been cleanup successfully" Apr 13 20:34:13.426148 kubelet[2582]: I0413 20:34:13.426030 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:13.438677 containerd[1462]: time="2026-04-13T20:34:13.437189046Z" level=info msg="StopPodSandbox for \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\"" Apr 13 20:34:13.438677 containerd[1462]: time="2026-04-13T20:34:13.438249299Z" level=info msg="Ensure that sandbox 8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3 in task-service has been cleanup successfully" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.319 [INFO][3702] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.319 [INFO][3702] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" iface="eth0" netns="/var/run/netns/cni-2a2b6b50-173a-d078-15f8-17bbbfd077a3" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.320 [INFO][3702] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" iface="eth0" netns="/var/run/netns/cni-2a2b6b50-173a-d078-15f8-17bbbfd077a3" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.320 [INFO][3702] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" iface="eth0" netns="/var/run/netns/cni-2a2b6b50-173a-d078-15f8-17bbbfd077a3" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.320 [INFO][3702] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.320 [INFO][3702] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.375 [INFO][3716] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" HandleID="k8s-pod-network.2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.376 [INFO][3716] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.376 [INFO][3716] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.387 [WARNING][3716] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" HandleID="k8s-pod-network.2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.387 [INFO][3716] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" HandleID="k8s-pod-network.2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.392 [INFO][3716] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:13.443954 containerd[1462]: 2026-04-13 20:34:13.424 [INFO][3702] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573" Apr 13 20:34:13.464943 containerd[1462]: time="2026-04-13T20:34:13.464545235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dn72w,Uid:3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.465505 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6-shm.mount: Deactivated successfully. 
Apr 13 20:34:13.466607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4-shm.mount: Deactivated successfully. Apr 13 20:34:13.467871 kubelet[2582]: E0413 20:34:13.466188 2582 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 13 20:34:13.467871 kubelet[2582]: E0413 20:34:13.466266 2582 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dn72w" Apr 13 20:34:13.467871 kubelet[2582]: E0413 20:34:13.466296 2582 kuberuntime_manager.go:1558] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dn72w" Apr 13 20:34:13.466796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3-shm.mount: Deactivated successfully. 
Apr 13 20:34:13.468678 kubelet[2582]: E0413 20:34:13.466367 2582 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dn72w_calico-system(3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dn72w_calico-system(3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dn72w" podUID="3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2" Apr 13 20:34:13.466984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b-shm.mount: Deactivated successfully. Apr 13 20:34:13.478340 kubelet[2582]: I0413 20:34:13.477563 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:13.480793 containerd[1462]: time="2026-04-13T20:34:13.478935041Z" level=info msg="StopPodSandbox for \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\"" Apr 13 20:34:13.489358 containerd[1462]: time="2026-04-13T20:34:13.488960552Z" level=info msg="Ensure that sandbox abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257 in task-service has been cleanup successfully" Apr 13 20:34:13.490424 kubelet[2582]: I0413 20:34:13.489668 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-node-sv82j" podStartSLOduration=2.443057552 podStartE2EDuration="25.489639753s" podCreationTimestamp="2026-04-13 20:33:48 +0000 UTC" firstStartedPulling="2026-04-13 20:33:49.326306558 +0000 UTC m=+21.548024488" 
lastFinishedPulling="2026-04-13 20:34:12.372888764 +0000 UTC m=+44.594606689" observedRunningTime="2026-04-13 20:34:13.48141253 +0000 UTC m=+45.703130490" watchObservedRunningTime="2026-04-13 20:34:13.489639753 +0000 UTC m=+45.711357689" Apr 13 20:34:13.491367 systemd[1]: run-netns-cni\x2d2a2b6b50\x2d173a\x2dd078\x2d15f8\x2d17bbbfd077a3.mount: Deactivated successfully. Apr 13 20:34:13.491547 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2720a111022465b26b2b6dd29d6855ae4f0cb645bf0682465492d9094d684573-shm.mount: Deactivated successfully. Apr 13 20:34:13.511979 kubelet[2582]: I0413 20:34:13.509562 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:13.512507 containerd[1462]: time="2026-04-13T20:34:13.512467514Z" level=info msg="StopPodSandbox for \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\"" Apr 13 20:34:13.518627 containerd[1462]: time="2026-04-13T20:34:13.518364769Z" level=info msg="Ensure that sandbox 99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6 in task-service has been cleanup successfully" Apr 13 20:34:13.526936 kubelet[2582]: I0413 20:34:13.526128 2582 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:13.532840 containerd[1462]: time="2026-04-13T20:34:13.532769515Z" level=info msg="StopPodSandbox for \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\"" Apr 13 20:34:13.547674 containerd[1462]: time="2026-04-13T20:34:13.547611584Z" level=info msg="Ensure that sandbox 7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4 in task-service has been cleanup successfully" Apr 13 20:34:13.554122 kubelet[2582]: I0413 20:34:13.553764 2582 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:13.560003 containerd[1462]: time="2026-04-13T20:34:13.559828896Z" level=info msg="StopPodSandbox for \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\"" Apr 13 20:34:13.567518 containerd[1462]: time="2026-04-13T20:34:13.567016006Z" level=info msg="Ensure that sandbox dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3 in task-service has been cleanup successfully" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.787 [INFO][3732] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.791 [INFO][3732] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" iface="eth0" netns="/var/run/netns/cni-f99e3e70-2b75-3a3c-d190-1fee9a24f001" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.793 [INFO][3732] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" iface="eth0" netns="/var/run/netns/cni-f99e3e70-2b75-3a3c-d190-1fee9a24f001" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.795 [INFO][3732] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" iface="eth0" netns="/var/run/netns/cni-f99e3e70-2b75-3a3c-d190-1fee9a24f001" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.795 [INFO][3732] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:13.795 [INFO][3732] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.104 [INFO][3829] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.106 [INFO][3829] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.106 [INFO][3829] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.123 [WARNING][3829] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.124 [INFO][3829] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.138 [INFO][3829] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.152193 containerd[1462]: 2026-04-13 20:34:14.147 [INFO][3732] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:14.154175 containerd[1462]: time="2026-04-13T20:34:14.154026647Z" level=info msg="TearDown network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" successfully" Apr 13 20:34:14.154175 containerd[1462]: time="2026-04-13T20:34:14.154073736Z" level=info msg="StopPodSandbox for \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" returns successfully" Apr 13 20:34:14.163938 containerd[1462]: time="2026-04-13T20:34:14.163824044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f8m2g,Uid:24399e91-dbab-4831-aa98-8db96cfff9e4,Namespace:kube-system,Attempt:1,}" Apr 13 20:34:14.167472 systemd[1]: run-netns-cni\x2df99e3e70\x2d2b75\x2d3a3c\x2dd190\x2d1fee9a24f001.mount: Deactivated successfully. 
Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.736 [INFO][3752] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.739 [INFO][3752] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" iface="eth0" netns="/var/run/netns/cni-583166b0-fd65-7542-95a6-8bd8bc4fff5a" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.744 [INFO][3752] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" iface="eth0" netns="/var/run/netns/cni-583166b0-fd65-7542-95a6-8bd8bc4fff5a" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.748 [INFO][3752] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" iface="eth0" netns="/var/run/netns/cni-583166b0-fd65-7542-95a6-8bd8bc4fff5a" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.748 [INFO][3752] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:13.748 [INFO][3752] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.109 [INFO][3823] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:14.213118 
containerd[1462]: 2026-04-13 20:34:14.110 [INFO][3823] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.138 [INFO][3823] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.169 [WARNING][3823] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.169 [INFO][3823] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.177 [INFO][3823] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.213118 containerd[1462]: 2026-04-13 20:34:14.189 [INFO][3752] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:14.214259 containerd[1462]: time="2026-04-13T20:34:14.214210034Z" level=info msg="TearDown network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" successfully" Apr 13 20:34:14.214464 containerd[1462]: time="2026-04-13T20:34:14.214434040Z" level=info msg="StopPodSandbox for \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" returns successfully" Apr 13 20:34:14.218592 containerd[1462]: time="2026-04-13T20:34:14.218215187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-tqdxr,Uid:c693e029-794b-434b-97e0-e01594b71108,Namespace:calico-system,Attempt:1,}" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.018 [INFO][3794] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.018 [INFO][3794] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" iface="eth0" netns="/var/run/netns/cni-d5991d2b-385a-412f-d25f-f728803d0c69" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.020 [INFO][3794] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" iface="eth0" netns="/var/run/netns/cni-d5991d2b-385a-412f-d25f-f728803d0c69" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.020 [INFO][3794] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" iface="eth0" netns="/var/run/netns/cni-d5991d2b-385a-412f-d25f-f728803d0c69" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.020 [INFO][3794] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.021 [INFO][3794] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.257 [INFO][3860] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.258 [INFO][3860] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.258 [INFO][3860] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.274 [WARNING][3860] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.274 [INFO][3860] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.279 [INFO][3860] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.297928 containerd[1462]: 2026-04-13 20:34:14.288 [INFO][3794] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:14.299380 containerd[1462]: time="2026-04-13T20:34:14.298811453Z" level=info msg="TearDown network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" successfully" Apr 13 20:34:14.299380 containerd[1462]: time="2026-04-13T20:34:14.298852141Z" level=info msg="StopPodSandbox for \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" returns successfully" Apr 13 20:34:14.304180 containerd[1462]: time="2026-04-13T20:34:14.303588525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff99f9c59-94r4d,Uid:72d06a65-1282-431f-bff3-3de35ce0d86c,Namespace:calico-system,Attempt:1,}" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.028 [INFO][3792] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.031 [INFO][3792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" iface="eth0" netns="/var/run/netns/cni-28417600-17e7-6129-a4a2-c02fc95bbb66" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.031 [INFO][3792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" iface="eth0" netns="/var/run/netns/cni-28417600-17e7-6129-a4a2-c02fc95bbb66" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.032 [INFO][3792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" iface="eth0" netns="/var/run/netns/cni-28417600-17e7-6129-a4a2-c02fc95bbb66" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.032 [INFO][3792] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.032 [INFO][3792] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.264 [INFO][3862] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.266 [INFO][3862] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.279 [INFO][3862] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.313 [WARNING][3862] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.313 [INFO][3862] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.315 [INFO][3862] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.332131 containerd[1462]: 2026-04-13 20:34:14.326 [INFO][3792] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:14.334311 containerd[1462]: time="2026-04-13T20:34:14.333217458Z" level=info msg="TearDown network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" successfully" Apr 13 20:34:14.334311 containerd[1462]: time="2026-04-13T20:34:14.333264629Z" level=info msg="StopPodSandbox for \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" returns successfully" Apr 13 20:34:14.336716 containerd[1462]: time="2026-04-13T20:34:14.336667698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-gszcq,Uid:ac435b27-3a39-4ef6-8b1e-437562c1e7eb,Namespace:calico-system,Attempt:1,}" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.930 [INFO][3793] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.930 
[INFO][3793] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" iface="eth0" netns="/var/run/netns/cni-73337e6a-6d7e-7b12-655c-344f1ead54ea" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.930 [INFO][3793] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" iface="eth0" netns="/var/run/netns/cni-73337e6a-6d7e-7b12-655c-344f1ead54ea" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.932 [INFO][3793] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" iface="eth0" netns="/var/run/netns/cni-73337e6a-6d7e-7b12-655c-344f1ead54ea" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.932 [INFO][3793] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:13.932 [INFO][3793] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.272 [INFO][3848] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.275 [INFO][3848] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.317 [INFO][3848] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.341 [WARNING][3848] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.341 [INFO][3848] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.345 [INFO][3848] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.381337 containerd[1462]: 2026-04-13 20:34:14.356 [INFO][3793] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:14.382336 containerd[1462]: time="2026-04-13T20:34:14.381679730Z" level=info msg="TearDown network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" successfully" Apr 13 20:34:14.382336 containerd[1462]: time="2026-04-13T20:34:14.381717690Z" level=info msg="StopPodSandbox for \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" returns successfully" Apr 13 20:34:14.392543 containerd[1462]: time="2026-04-13T20:34:14.392118352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9xj9t,Uid:29fcd04c-8e33-4e32-b58c-36d11bc97ed6,Namespace:calico-system,Attempt:1,}" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.047 [INFO][3791] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.047 [INFO][3791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" iface="eth0" netns="/var/run/netns/cni-751ef544-f407-85ed-24ab-806b69d597fb" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.047 [INFO][3791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" iface="eth0" netns="/var/run/netns/cni-751ef544-f407-85ed-24ab-806b69d597fb" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.048 [INFO][3791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" iface="eth0" netns="/var/run/netns/cni-751ef544-f407-85ed-24ab-806b69d597fb" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.048 [INFO][3791] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.049 [INFO][3791] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.306 [INFO][3866] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.306 [INFO][3866] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.347 [INFO][3866] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.375 [WARNING][3866] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.375 [INFO][3866] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.379 [INFO][3866] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:14.411697 containerd[1462]: 2026-04-13 20:34:14.402 [INFO][3791] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:14.413228 containerd[1462]: time="2026-04-13T20:34:14.412647019Z" level=info msg="TearDown network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" successfully" Apr 13 20:34:14.413228 containerd[1462]: time="2026-04-13T20:34:14.412693583Z" level=info msg="StopPodSandbox for \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" returns successfully" Apr 13 20:34:14.464832 systemd[1]: run-netns-cni\x2d751ef544\x2df407\x2d85ed\x2d24ab\x2d806b69d597fb.mount: Deactivated successfully. Apr 13 20:34:14.467131 systemd[1]: run-netns-cni\x2d583166b0\x2dfd65\x2d7542\x2d95a6\x2d8bd8bc4fff5a.mount: Deactivated successfully. Apr 13 20:34:14.467261 systemd[1]: run-netns-cni\x2dd5991d2b\x2d385a\x2d412f\x2dd25f\x2df728803d0c69.mount: Deactivated successfully. 
Apr 13 20:34:14.467410 systemd[1]: run-netns-cni\x2d28417600\x2d17e7\x2d6129\x2da4a2\x2dc02fc95bbb66.mount: Deactivated successfully. Apr 13 20:34:14.467543 systemd[1]: run-netns-cni\x2d73337e6a\x2d6d7e\x2d7b12\x2d655c\x2d344f1ead54ea.mount: Deactivated successfully. Apr 13 20:34:14.559577 kubelet[2582]: I0413 20:34:14.558956 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-ca-bundle\") pod \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " Apr 13 20:34:14.559577 kubelet[2582]: I0413 20:34:14.559040 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-nginx-config\" (UniqueName: \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-nginx-config\") pod \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " Apr 13 20:34:14.559577 kubelet[2582]: I0413 20:34:14.559097 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/projected/5d738dcc-413d-4de4-9c00-e5a6fce1be78-kube-api-access-clffx\" (UniqueName: \"kubernetes.io/projected/5d738dcc-413d-4de4-9c00-e5a6fce1be78-kube-api-access-clffx\") pod \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " Apr 13 20:34:14.559577 kubelet[2582]: I0413 20:34:14.559132 2582 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kubernetes.io/secret/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-backend-key-pair\") pod \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\" (UID: \"5d738dcc-413d-4de4-9c00-e5a6fce1be78\") " Apr 13 20:34:14.572020 containerd[1462]: 
time="2026-04-13T20:34:14.571344252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dn72w,Uid:3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:14.576008 kubelet[2582]: I0413 20:34:14.574101 2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-backend-key-pair" pod "5d738dcc-413d-4de4-9c00-e5a6fce1be78" (UID: "5d738dcc-413d-4de4-9c00-e5a6fce1be78"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Apr 13 20:34:14.576008 kubelet[2582]: I0413 20:34:14.574674 2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-ca-bundle" pod "5d738dcc-413d-4de4-9c00-e5a6fce1be78" (UID: "5d738dcc-413d-4de4-9c00-e5a6fce1be78"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:34:14.581157 kubelet[2582]: I0413 20:34:14.579606 2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-nginx-config" pod "5d738dcc-413d-4de4-9c00-e5a6fce1be78" (UID: "5d738dcc-413d-4de4-9c00-e5a6fce1be78"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Apr 13 20:34:14.580848 systemd[1]: var-lib-kubelet-pods-5d738dcc\x2d413d\x2d4de4\x2d9c00\x2de5a6fce1be78-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Apr 13 20:34:14.591669 kubelet[2582]: I0413 20:34:14.590056 2582 operation_generator.go:779] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d738dcc-413d-4de4-9c00-e5a6fce1be78-kube-api-access-clffx" pod "5d738dcc-413d-4de4-9c00-e5a6fce1be78" (UID: "5d738dcc-413d-4de4-9c00-e5a6fce1be78"). InnerVolumeSpecName "kube-api-access-clffx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Apr 13 20:34:14.601345 systemd[1]: var-lib-kubelet-pods-5d738dcc\x2d413d\x2d4de4\x2d9c00\x2de5a6fce1be78-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dclffx.mount: Deactivated successfully. Apr 13 20:34:14.660985 kubelet[2582]: I0413 20:34:14.660613 2582 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-ca-bundle\") on node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:34:14.660985 kubelet[2582]: I0413 20:34:14.660670 2582 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/5d738dcc-413d-4de4-9c00-e5a6fce1be78-nginx-config\") on node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:34:14.660985 kubelet[2582]: I0413 20:34:14.660692 2582 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-clffx\" (UniqueName: \"kubernetes.io/projected/5d738dcc-413d-4de4-9c00-e5a6fce1be78-kube-api-access-clffx\") on node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:34:14.660985 kubelet[2582]: I0413 20:34:14.660711 2582 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/5d738dcc-413d-4de4-9c00-e5a6fce1be78-whisker-backend-key-pair\") on node \"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal\" DevicePath \"\"" Apr 13 20:34:14.889236 systemd[1]: Removed slice kubepods-besteffort-pod5d738dcc_413d_4de4_9c00_e5a6fce1be78.slice - libcontainer container kubepods-besteffort-pod5d738dcc_413d_4de4_9c00_e5a6fce1be78.slice. 
Apr 13 20:34:15.027119 systemd-networkd[1373]: cali79d06642b75: Link UP Apr 13 20:34:15.028408 systemd-networkd[1373]: cali79d06642b75: Gained carrier Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.337 [ERROR][3886] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.416 [INFO][3886] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0 coredns-7d764666f9- kube-system 24399e91-dbab-4831-aa98-8db96cfff9e4 931 0 2026-04-13 20:33:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal coredns-7d764666f9-f8m2g eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali79d06642b75 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.416 [INFO][3886] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.667 [INFO][3935] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" HandleID="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.714 [INFO][3935] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" HandleID="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fe60), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"coredns-7d764666f9-f8m2g", "timestamp":"2026-04-13 20:34:14.667717403 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00055c840)} Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.714 [INFO][3935] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.714 [INFO][3935] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.714 [INFO][3935] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.740 [INFO][3935] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.799 [INFO][3935] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.838 [INFO][3935] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.863 [INFO][3935] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.875 [INFO][3935] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.875 [INFO][3935] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.897 [INFO][3935] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624 Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.956 [INFO][3935] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.983 [INFO][3935] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.1/26] block=192.168.82.0/26 handle="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.985 [INFO][3935] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.1/26] handle="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.985 [INFO][3935] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:15.092221 containerd[1462]: 2026-04-13 20:34:14.985 [INFO][3935] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.1/26] IPv6=[] ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" HandleID="k8s-pod-network.aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.095709 containerd[1462]: 2026-04-13 20:34:14.991 [INFO][3886] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"24399e91-dbab-4831-aa98-8db96cfff9e4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7d764666f9-f8m2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79d06642b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.095709 containerd[1462]: 2026-04-13 20:34:14.993 [INFO][3886] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.1/32] ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.095709 containerd[1462]: 2026-04-13 20:34:14.993 [INFO][3886] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79d06642b75 ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.095709 containerd[1462]: 2026-04-13 20:34:15.030 [INFO][3886] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.096728 containerd[1462]: 2026-04-13 20:34:15.033 [INFO][3886] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0", GenerateName:"coredns-7d764666f9-", 
Namespace:"kube-system", SelfLink:"", UID:"24399e91-dbab-4831-aa98-8db96cfff9e4", ResourceVersion:"931", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624", Pod:"coredns-7d764666f9-f8m2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79d06642b75", MAC:"22:2a:ce:a9:d4:00", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.096728 
containerd[1462]: 2026-04-13 20:34:15.085 [INFO][3886] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624" Namespace="kube-system" Pod="coredns-7d764666f9-f8m2g" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:15.154741 systemd[1]: Created slice kubepods-besteffort-poda682c6c9_d8fe_4765_ad06_55b42b978886.slice - libcontainer container kubepods-besteffort-poda682c6c9_d8fe_4765_ad06_55b42b978886.slice. Apr 13 20:34:15.242633 containerd[1462]: time="2026-04-13T20:34:15.236441158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:15.242633 containerd[1462]: time="2026-04-13T20:34:15.236577020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:15.242633 containerd[1462]: time="2026-04-13T20:34:15.236633027Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.242633 containerd[1462]: time="2026-04-13T20:34:15.236825756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.265191 kubelet[2582]: I0413 20:34:15.264679 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkkdn\" (UniqueName: \"kubernetes.io/projected/a682c6c9-d8fe-4765-ad06-55b42b978886-kube-api-access-lkkdn\") pod \"whisker-658d45bc66-9ksdp\" (UID: \"a682c6c9-d8fe-4765-ad06-55b42b978886\") " pod="calico-system/whisker-658d45bc66-9ksdp" Apr 13 20:34:15.265191 kubelet[2582]: I0413 20:34:15.264750 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a682c6c9-d8fe-4765-ad06-55b42b978886-whisker-backend-key-pair\") pod \"whisker-658d45bc66-9ksdp\" (UID: \"a682c6c9-d8fe-4765-ad06-55b42b978886\") " pod="calico-system/whisker-658d45bc66-9ksdp" Apr 13 20:34:15.265191 kubelet[2582]: I0413 20:34:15.264801 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a682c6c9-d8fe-4765-ad06-55b42b978886-whisker-ca-bundle\") pod \"whisker-658d45bc66-9ksdp\" (UID: \"a682c6c9-d8fe-4765-ad06-55b42b978886\") " pod="calico-system/whisker-658d45bc66-9ksdp" Apr 13 20:34:15.265191 kubelet[2582]: I0413 20:34:15.264830 2582 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/a682c6c9-d8fe-4765-ad06-55b42b978886-nginx-config\") pod \"whisker-658d45bc66-9ksdp\" (UID: \"a682c6c9-d8fe-4765-ad06-55b42b978886\") " pod="calico-system/whisker-658d45bc66-9ksdp" Apr 13 20:34:15.295194 systemd[1]: Started cri-containerd-aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624.scope - libcontainer container aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624. 
Apr 13 20:34:15.319292 systemd-networkd[1373]: calid549b74edf6: Link UP Apr 13 20:34:15.326964 systemd-networkd[1373]: calid549b74edf6: Gained carrier Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.435 [ERROR][3911] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.529 [INFO][3911] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0 calico-kube-controllers-7ff99f9c59- calico-system 72d06a65-1282-431f-bff3-3de35ce0d86c 934 0 2026-04-13 20:33:48 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7ff99f9c59 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal calico-kube-controllers-7ff99f9c59-94r4d eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid549b74edf6 [] [] }} ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.529 [INFO][3911] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" 
WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.752 [INFO][3957] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" HandleID="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.802 [INFO][3957] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" HandleID="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000398140), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"calico-kube-controllers-7ff99f9c59-94r4d", "timestamp":"2026-04-13 20:34:14.75274388 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.803 [INFO][3957] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.985 [INFO][3957] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.985 [INFO][3957] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:14.998 [INFO][3957] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.035 [INFO][3957] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.116 [INFO][3957] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.127 [INFO][3957] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.182 [INFO][3957] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.182 [INFO][3957] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.194 [INFO][3957] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.236 [INFO][3957] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.268 [INFO][3957] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.2/26] block=192.168.82.0/26 handle="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.268 [INFO][3957] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.2/26] handle="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.390465 containerd[1462]: 2026-04-13 20:34:15.269 [INFO][3957] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.269 [INFO][3957] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.2/26] IPv6=[] ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" HandleID="k8s-pod-network.bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.283 [INFO][3911] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0", GenerateName:"calico-kube-controllers-7ff99f9c59-", Namespace:"calico-system", SelfLink:"", UID:"72d06a65-1282-431f-bff3-3de35ce0d86c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff99f9c59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-kube-controllers-7ff99f9c59-94r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549b74edf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.283 [INFO][3911] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.2/32] ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 
20:34:15.283 [INFO][3911] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid549b74edf6 ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.326 [INFO][3911] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.331 [INFO][3911] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0", GenerateName:"calico-kube-controllers-7ff99f9c59-", Namespace:"calico-system", SelfLink:"", UID:"72d06a65-1282-431f-bff3-3de35ce0d86c", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"7ff99f9c59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d", Pod:"calico-kube-controllers-7ff99f9c59-94r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549b74edf6", MAC:"76:48:96:35:b1:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.391853 containerd[1462]: 2026-04-13 20:34:15.386 [INFO][3911] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d" Namespace="calico-system" Pod="calico-kube-controllers-7ff99f9c59-94r4d" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:15.469174 containerd[1462]: time="2026-04-13T20:34:15.468396948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658d45bc66-9ksdp,Uid:a682c6c9-d8fe-4765-ad06-55b42b978886,Namespace:calico-system,Attempt:0,}" Apr 13 20:34:15.482753 containerd[1462]: time="2026-04-13T20:34:15.482570085Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:15.483135 containerd[1462]: time="2026-04-13T20:34:15.482664156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:15.483135 containerd[1462]: time="2026-04-13T20:34:15.482693906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.487528 containerd[1462]: time="2026-04-13T20:34:15.485532945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.579386 systemd-networkd[1373]: cali9adf618b581: Link UP Apr 13 20:34:15.582263 systemd-networkd[1373]: cali9adf618b581: Gained carrier Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.540 [ERROR][3922] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.576 [INFO][3922] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0 calico-apiserver-7d4d888f55- calico-system ac435b27-3a39-4ef6-8b1e-437562c1e7eb 935 0 2026-04-13 20:33:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d4d888f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal calico-apiserver-7d4d888f55-gszcq eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali9adf618b581 [] [] }} 
ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.577 [INFO][3922] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.856 [INFO][3965] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" HandleID="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.897 [INFO][3965] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" HandleID="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d1490), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"calico-apiserver-7d4d888f55-gszcq", "timestamp":"2026-04-13 20:34:14.85613978 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00057e580)} Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:14.897 [INFO][3965] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.268 [INFO][3965] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.269 [INFO][3965] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.281 [INFO][3965] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.318 [INFO][3965] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.367 [INFO][3965] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.387 [INFO][3965] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.398 [INFO][3965] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.398 [INFO][3965] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 
handle="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.405 [INFO][3965] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45 Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.442 [INFO][3965] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.482 [INFO][3965] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.3/26] block=192.168.82.0/26 handle="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.486 [INFO][3965] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.3/26] handle="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.486 [INFO][3965] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:34:15.645193 containerd[1462]: 2026-04-13 20:34:15.486 [INFO][3965] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.3/26] IPv6=[] ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" HandleID="k8s-pod-network.878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.538 [INFO][3922] cni-plugin/k8s.go 418: Populated endpoint ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"ac435b27-3a39-4ef6-8b1e-437562c1e7eb", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-7d4d888f55-gszcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9adf618b581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.538 [INFO][3922] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.3/32] ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.538 [INFO][3922] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9adf618b581 ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.589 [INFO][3922] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.596 [INFO][3922] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" 
Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"ac435b27-3a39-4ef6-8b1e-437562c1e7eb", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45", Pod:"calico-apiserver-7d4d888f55-gszcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9adf618b581", MAC:"ca:3c:c7:56:4f:66", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.648454 containerd[1462]: 2026-04-13 20:34:15.638 [INFO][3922] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-gszcq" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:15.661203 systemd[1]: Started cri-containerd-bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d.scope - libcontainer container bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d. Apr 13 20:34:15.707928 containerd[1462]: time="2026-04-13T20:34:15.706669537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-f8m2g,Uid:24399e91-dbab-4831-aa98-8db96cfff9e4,Namespace:kube-system,Attempt:1,} returns sandbox id \"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624\"" Apr 13 20:34:15.726104 containerd[1462]: time="2026-04-13T20:34:15.725864035Z" level=info msg="CreateContainer within sandbox \"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:34:15.736084 systemd-networkd[1373]: cali0c742f22395: Link UP Apr 13 20:34:15.738220 systemd-networkd[1373]: cali0c742f22395: Gained carrier Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.514 [ERROR][3898] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.593 [INFO][3898] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0 calico-apiserver-7d4d888f55- calico-system c693e029-794b-434b-97e0-e01594b71108 928 0 2026-04-13 20:33:47 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver 
pod-template-hash:7d4d888f55 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal calico-apiserver-7d4d888f55-tqdxr eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] cali0c742f22395 [] [] }} ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.594 [INFO][3898] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.867 [INFO][3964] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" HandleID="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.959 [INFO][3964] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" HandleID="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004ed60), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"calico-apiserver-7d4d888f55-tqdxr", "timestamp":"2026-04-13 20:34:14.867598089 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000464420)} Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:14.959 [INFO][3964] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.487 [INFO][3964] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.487 [INFO][3964] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.510 [INFO][3964] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.576 [INFO][3964] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.621 [INFO][3964] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.631 [INFO][3964] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.641 [INFO][3964] ipam/ipam.go 
237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.641 [INFO][3964] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.650 [INFO][3964] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015 Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.672 [INFO][3964] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.82.0/26 handle="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.697 [INFO][3964] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.4/26] block=192.168.82.0/26 handle="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.697 [INFO][3964] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.4/26] handle="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.704 [INFO][3964] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Apr 13 20:34:15.801811 containerd[1462]: 2026-04-13 20:34:15.706 [INFO][3964] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.4/26] IPv6=[] ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" HandleID="k8s-pod-network.5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.724 [INFO][3898] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"c693e029-794b-434b-97e0-e01594b71108", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-7d4d888f55-tqdxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0c742f22395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.724 [INFO][3898] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.4/32] ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.724 [INFO][3898] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0c742f22395 ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.740 [INFO][3898] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.762 [INFO][3898] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" 
Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"c693e029-794b-434b-97e0-e01594b71108", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015", Pod:"calico-apiserver-7d4d888f55-tqdxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0c742f22395", MAC:"ca:d0:13:54:a0:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:15.806402 containerd[1462]: 2026-04-13 20:34:15.795 [INFO][3898] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015" Namespace="calico-system" Pod="calico-apiserver-7d4d888f55-tqdxr" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:15.816567 containerd[1462]: time="2026-04-13T20:34:15.815547746Z" level=info msg="CreateContainer within sandbox \"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"403b7bc9f3db308431af940edb21801157b84d7bf3a18749c2b3adb5c6907734\"" Apr 13 20:34:15.821261 containerd[1462]: time="2026-04-13T20:34:15.821205669Z" level=info msg="StartContainer for \"403b7bc9f3db308431af940edb21801157b84d7bf3a18749c2b3adb5c6907734\"" Apr 13 20:34:15.840841 containerd[1462]: time="2026-04-13T20:34:15.840676259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:15.840841 containerd[1462]: time="2026-04-13T20:34:15.840784877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:15.841187 containerd[1462]: time="2026-04-13T20:34:15.840829585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.841187 containerd[1462]: time="2026-04-13T20:34:15.841070203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:15.928830 systemd-networkd[1373]: calicc417c3a32e: Link UP Apr 13 20:34:15.941160 systemd-networkd[1373]: calicc417c3a32e: Gained carrier Apr 13 20:34:15.967166 systemd[1]: Started cri-containerd-403b7bc9f3db308431af940edb21801157b84d7bf3a18749c2b3adb5c6907734.scope - libcontainer container 403b7bc9f3db308431af940edb21801157b84d7bf3a18749c2b3adb5c6907734. 
Apr 13 20:34:15.997335 systemd[1]: Started cri-containerd-878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45.scope - libcontainer container 878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45. Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.697 [ERROR][3936] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.784 [INFO][3936] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0 goldmane-9f7667bb8- calico-system 29fcd04c-8e33-4e32-b58c-36d11bc97ed6 933 0 2026-04-13 20:33:47 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:9f7667bb8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal goldmane-9f7667bb8-9xj9t eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calicc417c3a32e [] [] }} ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.784 [INFO][3936] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.924 [INFO][4014] 
ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" HandleID="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.963 [INFO][4014] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" HandleID="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000414060), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"goldmane-9f7667bb8-9xj9t", "timestamp":"2026-04-13 20:34:14.924649529 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002b9080)} Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:14.963 [INFO][4014] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.697 [INFO][4014] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.697 [INFO][4014] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.723 [INFO][4014] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.762 [INFO][4014] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.774 [INFO][4014] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.780 [INFO][4014] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.784 [INFO][4014] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.789 [INFO][4014] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.796 [INFO][4014] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044 Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.812 [INFO][4014] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.858 [INFO][4014] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.5/26] block=192.168.82.0/26 handle="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.863 [INFO][4014] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.5/26] handle="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.864 [INFO][4014] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:16.018004 containerd[1462]: 2026-04-13 20:34:15.864 [INFO][4014] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.5/26] IPv6=[] ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" HandleID="k8s-pod-network.6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.890 [INFO][3936] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"29fcd04c-8e33-4e32-b58c-36d11bc97ed6", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"goldmane-9f7667bb8-9xj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc417c3a32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.897 [INFO][3936] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.5/32] ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.898 [INFO][3936] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicc417c3a32e 
ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.943 [INFO][3936] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.949 [INFO][3936] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"29fcd04c-8e33-4e32-b58c-36d11bc97ed6", ResourceVersion:"933", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044", Pod:"goldmane-9f7667bb8-9xj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc417c3a32e", MAC:"a2:a8:d9:6c:5c:32", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.019536 containerd[1462]: 2026-04-13 20:34:15.995 [INFO][3936] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044" Namespace="calico-system" Pod="goldmane-9f7667bb8-9xj9t" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:16.045562 systemd-networkd[1373]: cali5a2b9c564f6: Link UP Apr 13 20:34:16.050102 kubelet[2582]: I0413 20:34:16.048407 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:34:16.049332 systemd-networkd[1373]: cali5a2b9c564f6: Gained carrier Apr 13 20:34:16.067012 kubelet[2582]: I0413 20:34:16.066068 2582 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="5d738dcc-413d-4de4-9c00-e5a6fce1be78" path="/var/lib/kubelet/pods/5d738dcc-413d-4de4-9c00-e5a6fce1be78/volumes" Apr 13 20:34:16.087539 containerd[1462]: time="2026-04-13T20:34:16.084366729Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:16.092684 containerd[1462]: time="2026-04-13T20:34:16.089892808Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:16.092684 containerd[1462]: time="2026-04-13T20:34:16.089988079Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.092684 containerd[1462]: time="2026-04-13T20:34:16.090554973Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:14.908 [ERROR][3983] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.024 [INFO][3983] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0 csi-node-driver- calico-system 3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2 914 0 2026-04-13 20:33:48 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:589b8b8d94 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal csi-node-driver-dn72w eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5a2b9c564f6 [] [] }} ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.024 [INFO][3983] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.339 [INFO][4034] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" HandleID="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.362 [INFO][4034] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" HandleID="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f580), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"csi-node-driver-dn72w", "timestamp":"2026-04-13 20:34:15.339557813 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000248420)} Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.362 [INFO][4034] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.864 [INFO][4034] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.864 [INFO][4034] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.879 [INFO][4034] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.895 [INFO][4034] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.926 [INFO][4034] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.945 [INFO][4034] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.957 [INFO][4034] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.957 [INFO][4034] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.974 [INFO][4034] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3 Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:15.994 [INFO][4034] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:16.024 [INFO][4034] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.6/26] block=192.168.82.0/26 handle="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:16.025 [INFO][4034] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.6/26] handle="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:16.026 [INFO][4034] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:16.151430 containerd[1462]: 2026-04-13 20:34:16.026 [INFO][4034] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.6/26] IPv6=[] ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" HandleID="k8s-pod-network.fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.034 [INFO][3983] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0", 
GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"csi-node-driver-dn72w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a2b9c564f6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.034 [INFO][3983] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.6/32] ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.034 [INFO][3983] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a2b9c564f6 ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" 
Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.053 [INFO][3983] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.080 [INFO][3983] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"589b8b8d94", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3", Pod:"csi-node-driver-dn72w", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.82.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5a2b9c564f6", MAC:"de:8e:43:43:25:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.154129 containerd[1462]: 2026-04-13 20:34:16.134 [INFO][3983] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3" Namespace="calico-system" Pod="csi-node-driver-dn72w" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-csi--node--driver--dn72w-eth0" Apr 13 20:34:16.177412 containerd[1462]: time="2026-04-13T20:34:16.176275384Z" level=info msg="StartContainer for \"403b7bc9f3db308431af940edb21801157b84d7bf3a18749c2b3adb5c6907734\" returns successfully" Apr 13 20:34:16.200193 systemd[1]: Started cri-containerd-5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015.scope - libcontainer container 5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015. Apr 13 20:34:16.217770 containerd[1462]: time="2026-04-13T20:34:16.217245018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:16.217770 containerd[1462]: time="2026-04-13T20:34:16.217414900Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:16.217770 containerd[1462]: time="2026-04-13T20:34:16.217449109Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.217770 containerd[1462]: time="2026-04-13T20:34:16.217601786Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.288942 systemd[1]: Started cri-containerd-6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044.scope - libcontainer container 6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044. Apr 13 20:34:16.326989 containerd[1462]: time="2026-04-13T20:34:16.326219189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:16.327649 containerd[1462]: time="2026-04-13T20:34:16.327048531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:16.330169 containerd[1462]: time="2026-04-13T20:34:16.328687395Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.330169 containerd[1462]: time="2026-04-13T20:34:16.328936498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.383983 systemd[1]: Started cri-containerd-fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3.scope - libcontainer container fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3. 
Apr 13 20:34:16.411271 systemd-networkd[1373]: calib6b6240de74: Link UP Apr 13 20:34:16.411881 systemd-networkd[1373]: calib6b6240de74: Gained carrier Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:15.878 [ERROR][4134] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:15.976 [INFO][4134] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0 whisker-658d45bc66- calico-system a682c6c9-d8fe-4765-ad06-55b42b978886 957 0 2026-04-13 20:34:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:658d45bc66 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal whisker-658d45bc66-9ksdp eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib6b6240de74 [] [] }} ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:15.977 [INFO][4134] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.224 [INFO][4271] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" HandleID="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.255 [INFO][4271] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" HandleID="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006c5330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"whisker-658d45bc66-9ksdp", "timestamp":"2026-04-13 20:34:16.224129402 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000188c60)} Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.255 [INFO][4271] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.255 [INFO][4271] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.255 [INFO][4271] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.264 [INFO][4271] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.279 [INFO][4271] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.292 [INFO][4271] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.298 [INFO][4271] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.303 [INFO][4271] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.304 [INFO][4271] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.308 [INFO][4271] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.333 [INFO][4271] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.381 [INFO][4271] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.7/26] block=192.168.82.0/26 handle="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.382 [INFO][4271] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.7/26] handle="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.383 [INFO][4271] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:16.461213 containerd[1462]: 2026-04-13 20:34:16.383 [INFO][4271] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.7/26] IPv6=[] ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" HandleID="k8s-pod-network.bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.396 [INFO][4134] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0", GenerateName:"whisker-658d45bc66-", Namespace:"calico-system", SelfLink:"", UID:"a682c6c9-d8fe-4765-ad06-55b42b978886", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658d45bc66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"whisker-658d45bc66-9ksdp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib6b6240de74", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.397 [INFO][4134] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.7/32] ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.397 [INFO][4134] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib6b6240de74 
ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.412 [INFO][4134] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.414 [INFO][4134] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0", GenerateName:"whisker-658d45bc66-", Namespace:"calico-system", SelfLink:"", UID:"a682c6c9-d8fe-4765-ad06-55b42b978886", ResourceVersion:"957", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 34, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"658d45bc66", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca", Pod:"whisker-658d45bc66-9ksdp", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.82.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib6b6240de74", MAC:"12:e5:a4:03:79:02", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:16.466783 containerd[1462]: 2026-04-13 20:34:16.447 [INFO][4134] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca" Namespace="calico-system" Pod="whisker-658d45bc66-9ksdp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--658d45bc66--9ksdp-eth0" Apr 13 20:34:16.521684 containerd[1462]: time="2026-04-13T20:34:16.520758800Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:16.521684 containerd[1462]: time="2026-04-13T20:34:16.520847975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:16.521684 containerd[1462]: time="2026-04-13T20:34:16.521311957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.521684 containerd[1462]: time="2026-04-13T20:34:16.521494887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:16.526697 systemd-networkd[1373]: cali79d06642b75: Gained IPv6LL Apr 13 20:34:16.601507 containerd[1462]: time="2026-04-13T20:34:16.601417364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-tqdxr,Uid:c693e029-794b-434b-97e0-e01594b71108,Namespace:calico-system,Attempt:1,} returns sandbox id \"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015\"" Apr 13 20:34:16.613474 containerd[1462]: time="2026-04-13T20:34:16.613132770Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:34:16.628406 kubelet[2582]: I0413 20:34:16.627843 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-f8m2g" podStartSLOduration=44.627819412 podStartE2EDuration="44.627819412s" podCreationTimestamp="2026-04-13 20:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:34:16.625468848 +0000 UTC m=+48.847186784" watchObservedRunningTime="2026-04-13 20:34:16.627819412 +0000 UTC m=+48.849537349" Apr 13 20:34:16.681451 systemd[1]: Started cri-containerd-bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca.scope - libcontainer container bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca. 
Apr 13 20:34:16.723158 containerd[1462]: time="2026-04-13T20:34:16.722505472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7ff99f9c59-94r4d,Uid:72d06a65-1282-431f-bff3-3de35ce0d86c,Namespace:calico-system,Attempt:1,} returns sandbox id \"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d\"" Apr 13 20:34:16.728556 containerd[1462]: time="2026-04-13T20:34:16.728497113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dn72w,Uid:3e73a65a-fec1-4a28-8ba2-fc4af2b02bb2,Namespace:calico-system,Attempt:0,} returns sandbox id \"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3\"" Apr 13 20:34:16.793795 containerd[1462]: time="2026-04-13T20:34:16.793658595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-9f7667bb8-9xj9t,Uid:29fcd04c-8e33-4e32-b58c-36d11bc97ed6,Namespace:calico-system,Attempt:1,} returns sandbox id \"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044\"" Apr 13 20:34:16.890306 containerd[1462]: time="2026-04-13T20:34:16.890250085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d4d888f55-gszcq,Uid:ac435b27-3a39-4ef6-8b1e-437562c1e7eb,Namespace:calico-system,Attempt:1,} returns sandbox id \"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45\"" Apr 13 20:34:16.962316 containerd[1462]: time="2026-04-13T20:34:16.962254557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-658d45bc66-9ksdp,Uid:a682c6c9-d8fe-4765-ad06-55b42b978886,Namespace:calico-system,Attempt:0,} returns sandbox id \"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca\"" Apr 13 20:34:17.102259 systemd-networkd[1373]: calid549b74edf6: Gained IPv6LL Apr 13 20:34:17.166104 systemd-networkd[1373]: cali9adf618b581: Gained IPv6LL Apr 13 20:34:17.294425 systemd-networkd[1373]: cali5a2b9c564f6: Gained IPv6LL Apr 13 20:34:17.486564 systemd-networkd[1373]: cali0c742f22395: Gained IPv6LL Apr 13 
20:34:17.491185 systemd-networkd[1373]: calicc417c3a32e: Gained IPv6LL Apr 13 20:34:17.526988 kernel: calico-node[4127]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 13 20:34:17.614138 systemd-networkd[1373]: calib6b6240de74: Gained IPv6LL Apr 13 20:34:18.649488 systemd-networkd[1373]: vxlan.calico: Link UP Apr 13 20:34:18.649507 systemd-networkd[1373]: vxlan.calico: Gained carrier Apr 13 20:34:20.321965 containerd[1462]: time="2026-04-13T20:34:20.321459518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:20.323820 containerd[1462]: time="2026-04-13T20:34:20.323684852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Apr 13 20:34:20.324836 containerd[1462]: time="2026-04-13T20:34:20.324752832Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:20.329897 containerd[1462]: time="2026-04-13T20:34:20.329286428Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:20.330747 containerd[1462]: time="2026-04-13T20:34:20.330692561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.717320413s" Apr 13 20:34:20.330880 containerd[1462]: time="2026-04-13T20:34:20.330753228Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference 
\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:34:20.336495 containerd[1462]: time="2026-04-13T20:34:20.334732640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\"" Apr 13 20:34:20.340760 containerd[1462]: time="2026-04-13T20:34:20.340704654Z" level=info msg="CreateContainer within sandbox \"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:34:20.367893 containerd[1462]: time="2026-04-13T20:34:20.366926723Z" level=info msg="CreateContainer within sandbox \"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"a7e3ff4bf06f4ba102bf0d8ffe8b3d637dc37dcd6a508543cf90520c5055d86f\"" Apr 13 20:34:20.371206 containerd[1462]: time="2026-04-13T20:34:20.370806259Z" level=info msg="StartContainer for \"a7e3ff4bf06f4ba102bf0d8ffe8b3d637dc37dcd6a508543cf90520c5055d86f\"" Apr 13 20:34:20.384958 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1756639525.mount: Deactivated successfully. Apr 13 20:34:20.431100 systemd-networkd[1373]: vxlan.calico: Gained IPv6LL Apr 13 20:34:20.447207 systemd[1]: Started cri-containerd-a7e3ff4bf06f4ba102bf0d8ffe8b3d637dc37dcd6a508543cf90520c5055d86f.scope - libcontainer container a7e3ff4bf06f4ba102bf0d8ffe8b3d637dc37dcd6a508543cf90520c5055d86f. 
Apr 13 20:34:20.535081 containerd[1462]: time="2026-04-13T20:34:20.534802231Z" level=info msg="StartContainer for \"a7e3ff4bf06f4ba102bf0d8ffe8b3d637dc37dcd6a508543cf90520c5055d86f\" returns successfully" Apr 13 20:34:20.659733 kubelet[2582]: I0413 20:34:20.658832 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-apiserver-7d4d888f55-tqdxr" podStartSLOduration=29.93547875 podStartE2EDuration="33.658804388s" podCreationTimestamp="2026-04-13 20:33:47 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.609240872 +0000 UTC m=+48.830958797" lastFinishedPulling="2026-04-13 20:34:20.332566502 +0000 UTC m=+52.554284435" observedRunningTime="2026-04-13 20:34:20.657136553 +0000 UTC m=+52.878854488" watchObservedRunningTime="2026-04-13 20:34:20.658804388 +0000 UTC m=+52.880522324" Apr 13 20:34:21.644925 kubelet[2582]: I0413 20:34:21.643620 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:34:23.146944 kubelet[2582]: I0413 20:34:23.146073 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:34:23.271422 ntpd[1432]: Listen normally on 7 vxlan.calico 192.168.82.0:123 Apr 13 20:34:23.274216 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 7 vxlan.calico 192.168.82.0:123 Apr 13 20:34:23.274216 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 8 cali79d06642b75 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:34:23.274216 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 9 calid549b74edf6 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:34:23.274216 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 10 cali9adf618b581 [fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:34:23.271972 ntpd[1432]: Listen normally on 8 cali79d06642b75 [fe80::ecee:eeff:feee:eeee%4]:123 Apr 13 20:34:23.272577 ntpd[1432]: Listen normally on 9 calid549b74edf6 [fe80::ecee:eeff:feee:eeee%5]:123 Apr 13 20:34:23.273199 ntpd[1432]: Listen normally on 10 cali9adf618b581 
[fe80::ecee:eeff:feee:eeee%6]:123 Apr 13 20:34:23.273615 ntpd[1432]: Listen normally on 11 cali0c742f22395 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:34:23.277217 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 11 cali0c742f22395 [fe80::ecee:eeff:feee:eeee%7]:123 Apr 13 20:34:23.277217 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 12 calicc417c3a32e [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:34:23.277217 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 13 cali5a2b9c564f6 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:34:23.277217 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 14 calib6b6240de74 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:34:23.277217 ntpd[1432]: 13 Apr 20:34:23 ntpd[1432]: Listen normally on 15 vxlan.calico [fe80::647a:30ff:fe55:4402%11]:123 Apr 13 20:34:23.276542 ntpd[1432]: Listen normally on 12 calicc417c3a32e [fe80::ecee:eeff:feee:eeee%8]:123 Apr 13 20:34:23.276632 ntpd[1432]: Listen normally on 13 cali5a2b9c564f6 [fe80::ecee:eeff:feee:eeee%9]:123 Apr 13 20:34:23.276700 ntpd[1432]: Listen normally on 14 calib6b6240de74 [fe80::ecee:eeff:feee:eeee%10]:123 Apr 13 20:34:23.276764 ntpd[1432]: Listen normally on 15 vxlan.calico [fe80::647a:30ff:fe55:4402%11]:123 Apr 13 20:34:24.250177 containerd[1462]: time="2026-04-13T20:34:24.249846003Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:24.254820 containerd[1462]: time="2026-04-13T20:34:24.254239019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348" Apr 13 20:34:24.257329 containerd[1462]: time="2026-04-13T20:34:24.257291365Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:24.266086 containerd[1462]: 
time="2026-04-13T20:34:24.265994315Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:24.269009 containerd[1462]: time="2026-04-13T20:34:24.268952387Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.93105456s" Apr 13 20:34:24.269150 containerd[1462]: time="2026-04-13T20:34:24.269012850Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\"" Apr 13 20:34:24.272333 containerd[1462]: time="2026-04-13T20:34:24.272073879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Apr 13 20:34:24.320772 containerd[1462]: time="2026-04-13T20:34:24.320588123Z" level=info msg="CreateContainer within sandbox \"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Apr 13 20:34:24.347032 containerd[1462]: time="2026-04-13T20:34:24.346021590Z" level=info msg="CreateContainer within sandbox \"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318\"" Apr 13 20:34:24.349992 containerd[1462]: time="2026-04-13T20:34:24.349839585Z" level=info msg="StartContainer for \"121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318\"" Apr 13 20:34:24.446528 systemd[1]: Started 
cri-containerd-121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318.scope - libcontainer container 121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318. Apr 13 20:34:24.548449 containerd[1462]: time="2026-04-13T20:34:24.548270909Z" level=info msg="StartContainer for \"121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318\" returns successfully" Apr 13 20:34:25.522285 containerd[1462]: time="2026-04-13T20:34:25.522206079Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:25.524238 containerd[1462]: time="2026-04-13T20:34:25.524151962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502" Apr 13 20:34:25.525965 containerd[1462]: time="2026-04-13T20:34:25.525850858Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:25.532944 containerd[1462]: time="2026-04-13T20:34:25.531508414Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:25.533651 containerd[1462]: time="2026-04-13T20:34:25.533604606Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.261487122s" Apr 13 20:34:25.533930 containerd[1462]: time="2026-04-13T20:34:25.533872882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference 
\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\"" Apr 13 20:34:25.537039 containerd[1462]: time="2026-04-13T20:34:25.537003470Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Apr 13 20:34:25.540887 containerd[1462]: time="2026-04-13T20:34:25.540823441Z" level=info msg="CreateContainer within sandbox \"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 13 20:34:25.568453 containerd[1462]: time="2026-04-13T20:34:25.568370766Z" level=info msg="CreateContainer within sandbox \"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c04f539f5fcb24c6edd4212bb11def8b53b5429f40c3b1819de0a66e2f67ab79\"" Apr 13 20:34:25.570166 containerd[1462]: time="2026-04-13T20:34:25.569865780Z" level=info msg="StartContainer for \"c04f539f5fcb24c6edd4212bb11def8b53b5429f40c3b1819de0a66e2f67ab79\"" Apr 13 20:34:25.640146 systemd[1]: Started cri-containerd-c04f539f5fcb24c6edd4212bb11def8b53b5429f40c3b1819de0a66e2f67ab79.scope - libcontainer container c04f539f5fcb24c6edd4212bb11def8b53b5429f40c3b1819de0a66e2f67ab79. 
Apr 13 20:34:25.699540 containerd[1462]: time="2026-04-13T20:34:25.698884323Z" level=info msg="StartContainer for \"c04f539f5fcb24c6edd4212bb11def8b53b5429f40c3b1819de0a66e2f67ab79\" returns successfully" Apr 13 20:34:25.779652 kubelet[2582]: I0413 20:34:25.778688 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7ff99f9c59-94r4d" podStartSLOduration=30.237655183 podStartE2EDuration="37.778662639s" podCreationTimestamp="2026-04-13 20:33:48 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.729239465 +0000 UTC m=+48.950957393" lastFinishedPulling="2026-04-13 20:34:24.270246926 +0000 UTC m=+56.491964849" observedRunningTime="2026-04-13 20:34:24.694321211 +0000 UTC m=+56.916039148" watchObservedRunningTime="2026-04-13 20:34:25.778662639 +0000 UTC m=+58.000380578" Apr 13 20:34:26.055117 containerd[1462]: time="2026-04-13T20:34:26.054936057Z" level=info msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.135 [INFO][4864] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.137 [INFO][4864] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" iface="eth0" netns="/var/run/netns/cni-3c45f301-c336-f0e5-4691-552910dd2873" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.138 [INFO][4864] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" iface="eth0" netns="/var/run/netns/cni-3c45f301-c336-f0e5-4691-552910dd2873" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.138 [INFO][4864] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" iface="eth0" netns="/var/run/netns/cni-3c45f301-c336-f0e5-4691-552910dd2873" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.138 [INFO][4864] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.138 [INFO][4864] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.175 [INFO][4871] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.175 [INFO][4871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.175 [INFO][4871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.188 [WARNING][4871] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.188 [INFO][4871] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.190 [INFO][4871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:26.196819 containerd[1462]: 2026-04-13 20:34:26.193 [INFO][4864] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:26.198587 containerd[1462]: time="2026-04-13T20:34:26.197845866Z" level=info msg="TearDown network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" successfully" Apr 13 20:34:26.198587 containerd[1462]: time="2026-04-13T20:34:26.197890672Z" level=info msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" returns successfully" Apr 13 20:34:26.202797 containerd[1462]: time="2026-04-13T20:34:26.202538072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kh7qp,Uid:4f231b08-404f-4650-8082-80470e832cfe,Namespace:kube-system,Attempt:1,}" Apr 13 20:34:26.297501 systemd[1]: run-netns-cni\x2d3c45f301\x2dc336\x2df0e5\x2d4691\x2d552910dd2873.mount: Deactivated successfully. 
Apr 13 20:34:26.438963 systemd-networkd[1373]: calif049a1a512d: Link UP Apr 13 20:34:26.440412 systemd-networkd[1373]: calif049a1a512d: Gained carrier Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.296 [INFO][4878] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0 coredns-7d764666f9- kube-system 4f231b08-404f-4650-8082-80470e832cfe 1052 0 2026-04-13 20:33:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7d764666f9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal coredns-7d764666f9-kh7qp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif049a1a512d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.297 [INFO][4878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.361 [INFO][4889] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" HandleID="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" 
Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.377 [INFO][4889] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" HandleID="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000277a90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", "pod":"coredns-7d764666f9-kh7qp", "timestamp":"2026-04-13 20:34:26.36140339 +0000 UTC"}, Hostname:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000237080)} Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.377 [INFO][4889] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.377 [INFO][4889] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.377 [INFO][4889] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal' Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.381 [INFO][4889] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.387 [INFO][4889] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.397 [INFO][4889] ipam/ipam.go 526: Trying affinity for 192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.401 [INFO][4889] ipam/ipam.go 160: Attempting to load block cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.407 [INFO][4889] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.82.0/26 host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.407 [INFO][4889] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.82.0/26 handle="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.409 [INFO][4889] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.415 [INFO][4889] ipam/ipam.go 1272: Writing block in order to claim IPs 
block=192.168.82.0/26 handle="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.426 [INFO][4889] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.82.8/26] block=192.168.82.0/26 handle="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.426 [INFO][4889] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.82.8/26] handle="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" host="ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal" Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.427 [INFO][4889] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:26.474265 containerd[1462]: 2026-04-13 20:34:26.427 [INFO][4889] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.82.8/26] IPv6=[] ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" HandleID="k8s-pod-network.822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.478248 containerd[1462]: 2026-04-13 20:34:26.433 [INFO][4878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4f231b08-404f-4650-8082-80470e832cfe", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7d764666f9-kh7qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif049a1a512d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:26.478248 containerd[1462]: 2026-04-13 20:34:26.433 [INFO][4878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.82.8/32] ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.478248 containerd[1462]: 2026-04-13 20:34:26.433 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif049a1a512d ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.478248 containerd[1462]: 2026-04-13 20:34:26.444 [INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.478780 containerd[1462]: 2026-04-13 20:34:26.447 [INFO][4878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0", GenerateName:"coredns-7d764666f9-", 
Namespace:"kube-system", SelfLink:"", UID:"4f231b08-404f-4650-8082-80470e832cfe", ResourceVersion:"1052", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f", Pod:"coredns-7d764666f9-kh7qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif049a1a512d", MAC:"72:59:9a:be:8d:a8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:26.478780 
containerd[1462]: 2026-04-13 20:34:26.465 [INFO][4878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f" Namespace="kube-system" Pod="coredns-7d764666f9-kh7qp" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:26.605404 containerd[1462]: time="2026-04-13T20:34:26.575970717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 13 20:34:26.605404 containerd[1462]: time="2026-04-13T20:34:26.576273522Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 13 20:34:26.605404 containerd[1462]: time="2026-04-13T20:34:26.576294389Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:26.605404 containerd[1462]: time="2026-04-13T20:34:26.577010064Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 13 20:34:26.692216 systemd[1]: Started cri-containerd-822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f.scope - libcontainer container 822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f. 
Apr 13 20:34:26.873425 containerd[1462]: time="2026-04-13T20:34:26.872850864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7d764666f9-kh7qp,Uid:4f231b08-404f-4650-8082-80470e832cfe,Namespace:kube-system,Attempt:1,} returns sandbox id \"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f\"" Apr 13 20:34:27.012576 containerd[1462]: time="2026-04-13T20:34:27.011851167Z" level=info msg="CreateContainer within sandbox \"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Apr 13 20:34:27.136956 containerd[1462]: time="2026-04-13T20:34:27.136567226Z" level=info msg="CreateContainer within sandbox \"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"530167314f0bab078b279be8a49522b543f9bc6ff40d5ad92e92be21de2bc3ab\"" Apr 13 20:34:27.139302 containerd[1462]: time="2026-04-13T20:34:27.138675235Z" level=info msg="StartContainer for \"530167314f0bab078b279be8a49522b543f9bc6ff40d5ad92e92be21de2bc3ab\"" Apr 13 20:34:27.214217 systemd[1]: Started cri-containerd-530167314f0bab078b279be8a49522b543f9bc6ff40d5ad92e92be21de2bc3ab.scope - libcontainer container 530167314f0bab078b279be8a49522b543f9bc6ff40d5ad92e92be21de2bc3ab. 
Apr 13 20:34:27.317766 containerd[1462]: time="2026-04-13T20:34:27.317462253Z" level=info msg="StartContainer for \"530167314f0bab078b279be8a49522b543f9bc6ff40d5ad92e92be21de2bc3ab\" returns successfully" Apr 13 20:34:27.982283 kubelet[2582]: I0413 20:34:27.982138 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kube-system/coredns-7d764666f9-kh7qp" podStartSLOduration=55.982079661 podStartE2EDuration="55.982079661s" podCreationTimestamp="2026-04-13 20:33:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-04-13 20:34:27.915127976 +0000 UTC m=+60.136845994" watchObservedRunningTime="2026-04-13 20:34:27.982079661 +0000 UTC m=+60.203797595" Apr 13 20:34:28.033795 containerd[1462]: time="2026-04-13T20:34:28.033733750Z" level=info msg="StopPodSandbox for \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\"" Apr 13 20:34:28.046658 systemd-networkd[1373]: calif049a1a512d: Gained IPv6LL Apr 13 20:34:28.237453 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2044446904.mount: Deactivated successfully. Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.222 [WARNING][5011] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.223 [INFO][5011] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.223 [INFO][5011] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" iface="eth0" netns="" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.223 [INFO][5011] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.223 [INFO][5011] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.307 [INFO][5025] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.307 [INFO][5025] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.307 [INFO][5025] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.331 [WARNING][5025] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.331 [INFO][5025] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.335 [INFO][5025] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:28.341504 containerd[1462]: 2026-04-13 20:34:28.338 [INFO][5011] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.343461 containerd[1462]: time="2026-04-13T20:34:28.343414931Z" level=info msg="TearDown network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" successfully" Apr 13 20:34:28.343605 containerd[1462]: time="2026-04-13T20:34:28.343585115Z" level=info msg="StopPodSandbox for \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" returns successfully" Apr 13 20:34:28.344889 containerd[1462]: time="2026-04-13T20:34:28.344815267Z" level=info msg="RemovePodSandbox for \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\"" Apr 13 20:34:28.344889 containerd[1462]: time="2026-04-13T20:34:28.344885667Z" level=info msg="Forcibly stopping sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\"" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.428 [WARNING][5043] cni-plugin/k8s.go 610: WorkloadEndpoint does not exist in the datastore, 
moving forward with the clean up ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" WorkloadEndpoint="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.429 [INFO][5043] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.429 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" iface="eth0" netns="" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.429 [INFO][5043] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.430 [INFO][5043] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.491 [INFO][5051] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.491 [INFO][5051] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.491 [INFO][5051] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.506 [WARNING][5051] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.507 [INFO][5051] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" HandleID="k8s-pod-network.abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-whisker--7b55c6d7cc--6f99v-eth0" Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.512 [INFO][5051] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:28.530398 containerd[1462]: 2026-04-13 20:34:28.521 [INFO][5043] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257" Apr 13 20:34:28.532947 containerd[1462]: time="2026-04-13T20:34:28.531428933Z" level=info msg="TearDown network for sandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" successfully" Apr 13 20:34:28.545938 containerd[1462]: time="2026-04-13T20:34:28.545783278Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:28.547261 containerd[1462]: time="2026-04-13T20:34:28.547210268Z" level=info msg="RemovePodSandbox \"abad3bb56dd029b8b6610b04fd58cb269fcd06886a9bd1edea3333c87d7e8257\" returns successfully" Apr 13 20:34:28.548160 containerd[1462]: time="2026-04-13T20:34:28.548123044Z" level=info msg="StopPodSandbox for \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\"" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.659 [WARNING][5069] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"c693e029-794b-434b-97e0-e01594b71108", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015", Pod:"calico-apiserver-7d4d888f55-tqdxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0c742f22395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.660 [INFO][5069] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.660 [INFO][5069] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" iface="eth0" netns="" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.660 [INFO][5069] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.660 [INFO][5069] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.748 [INFO][5077] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.748 [INFO][5077] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.748 [INFO][5077] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.764 [WARNING][5077] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.764 [INFO][5077] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.768 [INFO][5077] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:28.778183 containerd[1462]: 2026-04-13 20:34:28.770 [INFO][5069] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:28.779448 containerd[1462]: time="2026-04-13T20:34:28.778229343Z" level=info msg="TearDown network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" successfully" Apr 13 20:34:28.779448 containerd[1462]: time="2026-04-13T20:34:28.778266706Z" level=info msg="StopPodSandbox for \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" returns successfully" Apr 13 20:34:28.780310 containerd[1462]: time="2026-04-13T20:34:28.780243169Z" level=info msg="RemovePodSandbox for \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\"" Apr 13 20:34:28.782776 containerd[1462]: time="2026-04-13T20:34:28.780311256Z" level=info msg="Forcibly stopping sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\"" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.891 [WARNING][5092] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"c693e029-794b-434b-97e0-e01594b71108", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"5df6b67e00ec5d83e8b2b2e69290f22f848f0e46dba2d6b753e20feaa2dfd015", Pod:"calico-apiserver-7d4d888f55-tqdxr", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali0c742f22395", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.892 [INFO][5092] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.892 
[INFO][5092] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" iface="eth0" netns="" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.893 [INFO][5092] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.893 [INFO][5092] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.982 [INFO][5099] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.982 [INFO][5099] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:28.983 [INFO][5099] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:29.004 [WARNING][5099] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:29.004 [INFO][5099] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" HandleID="k8s-pod-network.8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--tqdxr-eth0" Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:29.007 [INFO][5099] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:29.015696 containerd[1462]: 2026-04-13 20:34:29.011 [INFO][5092] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3" Apr 13 20:34:29.017485 containerd[1462]: time="2026-04-13T20:34:29.015760311Z" level=info msg="TearDown network for sandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" successfully" Apr 13 20:34:29.025691 containerd[1462]: time="2026-04-13T20:34:29.025428258Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:29.025691 containerd[1462]: time="2026-04-13T20:34:29.025517195Z" level=info msg="RemovePodSandbox \"8d16c8084d7402180e6d2f21003baf795485d473ab338295da5d73e38ff208f3\" returns successfully" Apr 13 20:34:29.026700 containerd[1462]: time="2026-04-13T20:34:29.026208990Z" level=info msg="StopPodSandbox for \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\"" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.119 [WARNING][5113] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"29fcd04c-8e33-4e32-b58c-36d11bc97ed6", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044", Pod:"goldmane-9f7667bb8-9xj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc417c3a32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.119 [INFO][5113] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.119 [INFO][5113] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" iface="eth0" netns="" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.119 [INFO][5113] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.119 [INFO][5113] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.167 [INFO][5120] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.168 [INFO][5120] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.168 [INFO][5120] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.191 [WARNING][5120] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.191 [INFO][5120] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.194 [INFO][5120] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:29.210090 containerd[1462]: 2026-04-13 20:34:29.204 [INFO][5113] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.213549 containerd[1462]: time="2026-04-13T20:34:29.211653833Z" level=info msg="TearDown network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" successfully" Apr 13 20:34:29.213549 containerd[1462]: time="2026-04-13T20:34:29.211700365Z" level=info msg="StopPodSandbox for \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" returns successfully" Apr 13 20:34:29.215235 containerd[1462]: time="2026-04-13T20:34:29.214599394Z" level=info msg="RemovePodSandbox for \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\"" Apr 13 20:34:29.215235 containerd[1462]: time="2026-04-13T20:34:29.214656457Z" level=info msg="Forcibly stopping sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\"" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.370 [WARNING][5138] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0", GenerateName:"goldmane-9f7667bb8-", Namespace:"calico-system", SelfLink:"", UID:"29fcd04c-8e33-4e32-b58c-36d11bc97ed6", ResourceVersion:"975", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"9f7667bb8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044", Pod:"goldmane-9f7667bb8-9xj9t", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.82.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calicc417c3a32e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.371 [INFO][5138] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.371 [INFO][5138] cni-plugin/dataplane_linux.go 555: 
CleanUpNamespace called with no netns name, ignoring. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" iface="eth0" netns="" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.371 [INFO][5138] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.371 [INFO][5138] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.433 [INFO][5145] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.436 [INFO][5145] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.436 [INFO][5145] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.452 [WARNING][5145] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.452 [INFO][5145] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" HandleID="k8s-pod-network.dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-goldmane--9f7667bb8--9xj9t-eth0" Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.455 [INFO][5145] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:29.464625 containerd[1462]: 2026-04-13 20:34:29.459 [INFO][5138] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3" Apr 13 20:34:29.464625 containerd[1462]: time="2026-04-13T20:34:29.464157414Z" level=info msg="TearDown network for sandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" successfully" Apr 13 20:34:29.473739 containerd[1462]: time="2026-04-13T20:34:29.473675581Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:29.474120 containerd[1462]: time="2026-04-13T20:34:29.473773564Z" level=info msg="RemovePodSandbox \"dacdfce23983b593664a0f460d8f5be001b4d66a016a9f855cb3821313fa95f3\" returns successfully" Apr 13 20:34:29.475373 containerd[1462]: time="2026-04-13T20:34:29.475049571Z" level=info msg="StopPodSandbox for \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\"" Apr 13 20:34:29.632148 containerd[1462]: time="2026-04-13T20:34:29.632075854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:29.638716 containerd[1462]: time="2026-04-13T20:34:29.638640077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Apr 13 20:34:29.640029 containerd[1462]: time="2026-04-13T20:34:29.639984468Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:29.652581 containerd[1462]: time="2026-04-13T20:34:29.652513937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:29.658480 containerd[1462]: time="2026-04-13T20:34:29.658413819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 4.121358846s" Apr 13 20:34:29.659845 containerd[1462]: time="2026-04-13T20:34:29.658485081Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference 
\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Apr 13 20:34:29.662285 containerd[1462]: time="2026-04-13T20:34:29.662229967Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Apr 13 20:34:29.671401 containerd[1462]: time="2026-04-13T20:34:29.671345126Z" level=info msg="CreateContainer within sandbox \"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.569 [WARNING][5159] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"24399e91-dbab-4831-aa98-8db96cfff9e4", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624", Pod:"coredns-7d764666f9-f8m2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.1/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79d06642b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.570 [INFO][5159] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.570 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" iface="eth0" netns="" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.570 [INFO][5159] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.570 [INFO][5159] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.641 [INFO][5166] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.642 [INFO][5166] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.642 [INFO][5166] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.656 [WARNING][5166] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.656 [INFO][5166] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.666 [INFO][5166] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:29.677071 containerd[1462]: 2026-04-13 20:34:29.671 [INFO][5159] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.677071 containerd[1462]: time="2026-04-13T20:34:29.676867079Z" level=info msg="TearDown network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" successfully" Apr 13 20:34:29.680625 containerd[1462]: time="2026-04-13T20:34:29.677939981Z" level=info msg="StopPodSandbox for \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" returns successfully" Apr 13 20:34:29.695496 containerd[1462]: time="2026-04-13T20:34:29.695436541Z" level=info msg="RemovePodSandbox for \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\"" Apr 13 20:34:29.695650 containerd[1462]: time="2026-04-13T20:34:29.695500543Z" level=info msg="Forcibly stopping sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\"" Apr 13 20:34:29.705021 containerd[1462]: time="2026-04-13T20:34:29.703065711Z" level=info msg="CreateContainer within sandbox 
\"6ee7ad35152521535e12f2201115a98e4cee8c21abe13b798d36b939341d7044\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124\"" Apr 13 20:34:29.707041 containerd[1462]: time="2026-04-13T20:34:29.705635403Z" level=info msg="StartContainer for \"7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124\"" Apr 13 20:34:29.849646 systemd[1]: run-containerd-runc-k8s.io-7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124-runc.CXrX5X.mount: Deactivated successfully. Apr 13 20:34:29.863460 systemd[1]: Started cri-containerd-7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124.scope - libcontainer container 7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124. Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.841 [WARNING][5192] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"24399e91-dbab-4831-aa98-8db96cfff9e4", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"aec4c618f4be0f3fae16eafd40d0b99507d0b8994352e5816ff1502d569a2624", Pod:"coredns-7d764666f9-f8m2g", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali79d06642b75", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.841 [INFO][5192] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.842 [INFO][5192] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" iface="eth0" netns="" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.842 [INFO][5192] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.842 [INFO][5192] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.900 [INFO][5219] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.900 [INFO][5219] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.900 [INFO][5219] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.919 [WARNING][5219] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.919 [INFO][5219] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" HandleID="k8s-pod-network.66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--f8m2g-eth0" Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.921 [INFO][5219] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:29.929808 containerd[1462]: 2026-04-13 20:34:29.926 [INFO][5192] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b" Apr 13 20:34:29.929808 containerd[1462]: time="2026-04-13T20:34:29.929767379Z" level=info msg="TearDown network for sandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" successfully" Apr 13 20:34:29.939250 containerd[1462]: time="2026-04-13T20:34:29.938251155Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:29.939250 containerd[1462]: time="2026-04-13T20:34:29.938426103Z" level=info msg="RemovePodSandbox \"66e21fbd2f230fa89b62f0ab62185f31a1249010fb86dacd2c286c0c4f08a74b\" returns successfully" Apr 13 20:34:29.941362 containerd[1462]: time="2026-04-13T20:34:29.940799119Z" level=info msg="StopPodSandbox for \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\"" Apr 13 20:34:29.952747 containerd[1462]: time="2026-04-13T20:34:29.952683328Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:29.957312 containerd[1462]: time="2026-04-13T20:34:29.957117051Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Apr 13 20:34:29.962556 containerd[1462]: time="2026-04-13T20:34:29.962478435Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 300.164335ms" Apr 13 20:34:29.962830 containerd[1462]: time="2026-04-13T20:34:29.962638302Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Apr 13 20:34:29.969120 containerd[1462]: time="2026-04-13T20:34:29.968083846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Apr 13 20:34:29.975862 containerd[1462]: time="2026-04-13T20:34:29.975585826Z" level=info msg="CreateContainer within sandbox \"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Apr 13 20:34:29.999965 containerd[1462]: time="2026-04-13T20:34:29.999242212Z" 
level=info msg="CreateContainer within sandbox \"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"9f27ba7586d3ae7d3797979759d138f4a9f14992536ff26d7def05ca7e470b3f\"" Apr 13 20:34:30.007700 containerd[1462]: time="2026-04-13T20:34:30.007643586Z" level=info msg="StartContainer for \"9f27ba7586d3ae7d3797979759d138f4a9f14992536ff26d7def05ca7e470b3f\"" Apr 13 20:34:30.026739 containerd[1462]: time="2026-04-13T20:34:30.026675045Z" level=info msg="StartContainer for \"7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124\" returns successfully" Apr 13 20:34:30.101778 systemd[1]: Started cri-containerd-9f27ba7586d3ae7d3797979759d138f4a9f14992536ff26d7def05ca7e470b3f.scope - libcontainer container 9f27ba7586d3ae7d3797979759d138f4a9f14992536ff26d7def05ca7e470b3f. Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.113 [WARNING][5240] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0", GenerateName:"calico-kube-controllers-7ff99f9c59-", Namespace:"calico-system", SelfLink:"", UID:"72d06a65-1282-431f-bff3-3de35ce0d86c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff99f9c59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d", Pod:"calico-kube-controllers-7ff99f9c59-94r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549b74edf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.123 [INFO][5240] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.217945 
containerd[1462]: 2026-04-13 20:34:30.123 [INFO][5240] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" iface="eth0" netns="" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.123 [INFO][5240] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.123 [INFO][5240] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.183 [INFO][5281] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.183 [INFO][5281] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.183 [INFO][5281] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.206 [WARNING][5281] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.206 [INFO][5281] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.209 [INFO][5281] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:30.217945 containerd[1462]: 2026-04-13 20:34:30.214 [INFO][5240] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.220764 containerd[1462]: time="2026-04-13T20:34:30.217978683Z" level=info msg="TearDown network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" successfully" Apr 13 20:34:30.220764 containerd[1462]: time="2026-04-13T20:34:30.218014170Z" level=info msg="StopPodSandbox for \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" returns successfully" Apr 13 20:34:30.220764 containerd[1462]: time="2026-04-13T20:34:30.218641748Z" level=info msg="RemovePodSandbox for \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\"" Apr 13 20:34:30.220764 containerd[1462]: time="2026-04-13T20:34:30.218681289Z" level=info msg="Forcibly stopping sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\"" Apr 13 20:34:30.246627 containerd[1462]: time="2026-04-13T20:34:30.246522907Z" level=info msg="StartContainer for \"9f27ba7586d3ae7d3797979759d138f4a9f14992536ff26d7def05ca7e470b3f\" returns successfully" Apr 13 20:34:30.270814 ntpd[1432]: Listen normally on 16 calif049a1a512d [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:34:30.272570 ntpd[1432]: 13 Apr 20:34:30 ntpd[1432]: Listen normally on 16 calif049a1a512d [fe80::ecee:eeff:feee:eeee%14]:123 Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.312 [WARNING][5303] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0", GenerateName:"calico-kube-controllers-7ff99f9c59-", Namespace:"calico-system", SelfLink:"", UID:"72d06a65-1282-431f-bff3-3de35ce0d86c", ResourceVersion:"1047", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7ff99f9c59", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"bc3b8f04569ddecea74ed5f66bf00f4cb7164114f494f6dd8c30fd37d213884d", Pod:"calico-kube-controllers-7ff99f9c59-94r4d", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.82.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid549b74edf6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.313 [INFO][5303] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.397414 
containerd[1462]: 2026-04-13 20:34:30.313 [INFO][5303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" iface="eth0" netns="" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.313 [INFO][5303] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.313 [INFO][5303] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.377 [INFO][5316] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.377 [INFO][5316] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.377 [INFO][5316] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.389 [WARNING][5316] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.389 [INFO][5316] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" HandleID="k8s-pod-network.99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--kube--controllers--7ff99f9c59--94r4d-eth0" Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.391 [INFO][5316] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:30.397414 containerd[1462]: 2026-04-13 20:34:30.394 [INFO][5303] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6" Apr 13 20:34:30.401788 containerd[1462]: time="2026-04-13T20:34:30.399480417Z" level=info msg="TearDown network for sandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" successfully" Apr 13 20:34:30.406549 containerd[1462]: time="2026-04-13T20:34:30.406500826Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:30.406823 containerd[1462]: time="2026-04-13T20:34:30.406798883Z" level=info msg="RemovePodSandbox \"99ffc7033370ce50d9f95aa74739d8a17585a8492c8ee41d762ff8ca3a53c6d6\" returns successfully" Apr 13 20:34:30.407543 containerd[1462]: time="2026-04-13T20:34:30.407509816Z" level=info msg="StopPodSandbox for \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\"" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.473 [WARNING][5334] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"ac435b27-3a39-4ef6-8b1e-437562c1e7eb", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45", Pod:"calico-apiserver-7d4d888f55-gszcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9adf618b581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.474 [INFO][5334] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.474 [INFO][5334] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" iface="eth0" netns="" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.474 [INFO][5334] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.474 [INFO][5334] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.523 [INFO][5341] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.523 [INFO][5341] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.523 [INFO][5341] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.534 [WARNING][5341] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.534 [INFO][5341] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.537 [INFO][5341] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:30.544140 containerd[1462]: 2026-04-13 20:34:30.540 [INFO][5334] cni-plugin/k8s.go 665: Teardown processing complete. 
ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.546582 containerd[1462]: time="2026-04-13T20:34:30.545088778Z" level=info msg="TearDown network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" successfully" Apr 13 20:34:30.546582 containerd[1462]: time="2026-04-13T20:34:30.545135626Z" level=info msg="StopPodSandbox for \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" returns successfully" Apr 13 20:34:30.546582 containerd[1462]: time="2026-04-13T20:34:30.546126959Z" level=info msg="RemovePodSandbox for \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\"" Apr 13 20:34:30.546582 containerd[1462]: time="2026-04-13T20:34:30.546162474Z" level=info msg="Forcibly stopping sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\"" Apr 13 20:34:30.702637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2759491589.mount: Deactivated successfully. Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.621 [WARNING][5355] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0", GenerateName:"calico-apiserver-7d4d888f55-", Namespace:"calico-system", SelfLink:"", UID:"ac435b27-3a39-4ef6-8b1e-437562c1e7eb", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d4d888f55", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"878e928e8e17b58796d8a3c585e5824a4855f0faa0bd724aca598406577c6e45", Pod:"calico-apiserver-7d4d888f55-gszcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.82.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"cali9adf618b581", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.621 [INFO][5355] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.621 
[INFO][5355] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" iface="eth0" netns="" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.621 [INFO][5355] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.621 [INFO][5355] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.672 [INFO][5363] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.673 [INFO][5363] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.673 [INFO][5363] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.684 [WARNING][5363] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.684 [INFO][5363] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" HandleID="k8s-pod-network.7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-calico--apiserver--7d4d888f55--gszcq-eth0" Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.688 [INFO][5363] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:30.704932 containerd[1462]: 2026-04-13 20:34:30.695 [INFO][5355] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4" Apr 13 20:34:30.708091 containerd[1462]: time="2026-04-13T20:34:30.706316783Z" level=info msg="TearDown network for sandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" successfully" Apr 13 20:34:30.716306 containerd[1462]: time="2026-04-13T20:34:30.716155707Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:30.717172 containerd[1462]: time="2026-04-13T20:34:30.716455631Z" level=info msg="RemovePodSandbox \"7e1bca3e6f00513ace9c066acaca3cfa88faf9a9af0347dfebc8cc4b4e566dc4\" returns successfully" Apr 13 20:34:30.717686 containerd[1462]: time="2026-04-13T20:34:30.717639968Z" level=info msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.791 [WARNING][5377] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4f231b08-404f-4650-8082-80470e832cfe", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f", Pod:"coredns-7d764666f9-kh7qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", 
"ksa.kube-system.coredns"}, InterfaceName:"calif049a1a512d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.792 [INFO][5377] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.792 [INFO][5377] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" iface="eth0" netns="" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.792 [INFO][5377] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.792 [INFO][5377] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.836 [INFO][5384] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.836 [INFO][5384] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.836 [INFO][5384] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.854 [WARNING][5384] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.854 [INFO][5384] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.857 [INFO][5384] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:30.866040 containerd[1462]: 2026-04-13 20:34:30.861 [INFO][5377] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:30.869269 containerd[1462]: time="2026-04-13T20:34:30.869011767Z" level=info msg="TearDown network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" successfully" Apr 13 20:34:30.869269 containerd[1462]: time="2026-04-13T20:34:30.869050864Z" level=info msg="StopPodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" returns successfully" Apr 13 20:34:30.870865 containerd[1462]: time="2026-04-13T20:34:30.870816588Z" level=info msg="RemovePodSandbox for \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" Apr 13 20:34:30.871009 containerd[1462]: time="2026-04-13T20:34:30.870873009Z" level=info msg="Forcibly stopping sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\"" Apr 13 20:34:30.942663 kubelet[2582]: I0413 20:34:30.942575 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" 
pod="calico-system/calico-apiserver-7d4d888f55-gszcq" podStartSLOduration=30.874527936 podStartE2EDuration="43.942551381s" podCreationTimestamp="2026-04-13 20:33:47 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.896793248 +0000 UTC m=+49.118511163" lastFinishedPulling="2026-04-13 20:34:29.964816684 +0000 UTC m=+62.186534608" observedRunningTime="2026-04-13 20:34:30.939741112 +0000 UTC m=+63.161459053" watchObservedRunningTime="2026-04-13 20:34:30.942551381 +0000 UTC m=+63.164269321" Apr 13 20:34:30.987438 kubelet[2582]: I0413 20:34:30.982189 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/goldmane-9f7667bb8-9xj9t" podStartSLOduration=31.129386646 podStartE2EDuration="43.982167342s" podCreationTimestamp="2026-04-13 20:33:47 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.808478103 +0000 UTC m=+49.030196027" lastFinishedPulling="2026-04-13 20:34:29.661258793 +0000 UTC m=+61.882976723" observedRunningTime="2026-04-13 20:34:30.98034929 +0000 UTC m=+63.202067234" watchObservedRunningTime="2026-04-13 20:34:30.982167342 +0000 UTC m=+63.203885283" Apr 13 20:34:31.039786 systemd[1]: run-containerd-runc-k8s.io-7593256f1d4c5c86c99ea433a72ce64e40d8e978f69a15bd038c99302e7e7124-runc.S83eMs.mount: Deactivated successfully. Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.081 [WARNING][5399] cni-plugin/k8s.go 616: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0", GenerateName:"coredns-7d764666f9-", Namespace:"kube-system", SelfLink:"", UID:"4f231b08-404f-4650-8082-80470e832cfe", ResourceVersion:"1068", Generation:0, CreationTimestamp:time.Date(2026, time.April, 13, 20, 33, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7d764666f9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081-3-7-810edbea8c24f27595c6.c.flatcar-212911.internal", ContainerID:"822af91dbd455697d7df215533eab6e0b5abffe6961c5ea42089cf1d44eafe5f", Pod:"coredns-7d764666f9-kh7qp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.82.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif049a1a512d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.082 [INFO][5399] cni-plugin/k8s.go 652: Cleaning up netns ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.082 [INFO][5399] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" iface="eth0" netns="" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.082 [INFO][5399] cni-plugin/k8s.go 659: Releasing IP address(es) ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.082 [INFO][5399] cni-plugin/utils.go 204: Calico CNI releasing IP address ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.247 [INFO][5427] ipam/ipam_plugin.go 497: Releasing address using handleID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.247 [INFO][5427] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.247 [INFO][5427] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.264 [WARNING][5427] ipam/ipam_plugin.go 514: Asked to release address but it doesn't exist. Ignoring ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.265 [INFO][5427] ipam/ipam_plugin.go 525: Releasing address using workloadID ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" HandleID="k8s-pod-network.c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Workload="ci--4081--3--7--810edbea8c24f27595c6.c.flatcar--212911.internal-k8s-coredns--7d764666f9--kh7qp-eth0" Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.268 [INFO][5427] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Apr 13 20:34:31.291269 containerd[1462]: 2026-04-13 20:34:31.280 [INFO][5399] cni-plugin/k8s.go 665: Teardown processing complete. ContainerID="c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89" Apr 13 20:34:31.293858 containerd[1462]: time="2026-04-13T20:34:31.291334118Z" level=info msg="TearDown network for sandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" successfully" Apr 13 20:34:31.303602 containerd[1462]: time="2026-04-13T20:34:31.303524824Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 13 20:34:31.303779 containerd[1462]: time="2026-04-13T20:34:31.303658670Z" level=info msg="RemovePodSandbox \"c3b906c61a7c671ebc094c5b6ccd2fcaf38b5946c72553c46c514c4f27c40a89\" returns successfully" Apr 13 20:34:31.558260 containerd[1462]: time="2026-04-13T20:34:31.558111517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:31.560732 containerd[1462]: time="2026-04-13T20:34:31.560542052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Apr 13 20:34:31.561842 containerd[1462]: time="2026-04-13T20:34:31.561794606Z" level=info msg="ImageCreate event name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:31.571433 containerd[1462]: time="2026-04-13T20:34:31.568755231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:31.571433 containerd[1462]: time="2026-04-13T20:34:31.569769036Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.601456095s" Apr 13 20:34:31.571433 containerd[1462]: time="2026-04-13T20:34:31.569813139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Apr 13 20:34:31.574932 containerd[1462]: time="2026-04-13T20:34:31.574129468Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\"" Apr 13 20:34:31.580873 containerd[1462]: time="2026-04-13T20:34:31.580818906Z" level=info msg="CreateContainer within sandbox \"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Apr 13 20:34:31.600178 containerd[1462]: time="2026-04-13T20:34:31.600005770Z" level=info msg="CreateContainer within sandbox \"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"7b0de535ee4e1dc3ae5bfda22f14a4743fa5dbc205fc310bdf2abe3aee4a8536\"" Apr 13 20:34:31.601874 containerd[1462]: time="2026-04-13T20:34:31.601240482Z" level=info msg="StartContainer for \"7b0de535ee4e1dc3ae5bfda22f14a4743fa5dbc205fc310bdf2abe3aee4a8536\"" Apr 13 20:34:31.669166 systemd[1]: Started cri-containerd-7b0de535ee4e1dc3ae5bfda22f14a4743fa5dbc205fc310bdf2abe3aee4a8536.scope - libcontainer container 7b0de535ee4e1dc3ae5bfda22f14a4743fa5dbc205fc310bdf2abe3aee4a8536. 
Apr 13 20:34:31.793429 containerd[1462]: time="2026-04-13T20:34:31.793169142Z" level=info msg="StartContainer for \"7b0de535ee4e1dc3ae5bfda22f14a4743fa5dbc205fc310bdf2abe3aee4a8536\" returns successfully" Apr 13 20:34:32.952593 kubelet[2582]: I0413 20:34:32.952499 2582 prober_manager.go:356] "Failed to trigger a manual run" probe="Readiness" Apr 13 20:34:33.111590 containerd[1462]: time="2026-04-13T20:34:33.111528946Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:33.113721 containerd[1462]: time="2026-04-13T20:34:33.113660146Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317" Apr 13 20:34:33.115931 containerd[1462]: time="2026-04-13T20:34:33.115616740Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:33.121010 containerd[1462]: time="2026-04-13T20:34:33.120073334Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:33.122047 containerd[1462]: time="2026-04-13T20:34:33.121998829Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.547816373s" Apr 13 20:34:33.122185 containerd[1462]: time="2026-04-13T20:34:33.122053079Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" 
returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\"" Apr 13 20:34:33.126925 containerd[1462]: time="2026-04-13T20:34:33.124432571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Apr 13 20:34:33.129394 containerd[1462]: time="2026-04-13T20:34:33.129353616Z" level=info msg="CreateContainer within sandbox \"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 13 20:34:33.158221 containerd[1462]: time="2026-04-13T20:34:33.158163889Z" level=info msg="CreateContainer within sandbox \"fc0379edf1d4ee38bbdb342ed9d3c9604adacc1a8383df40c1ace38356561fb3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"2e85e2ef7385a929b39a4f2fc510aeca3c355fdb4242376192d5821999af77a6\"" Apr 13 20:34:33.162200 containerd[1462]: time="2026-04-13T20:34:33.162151899Z" level=info msg="StartContainer for \"2e85e2ef7385a929b39a4f2fc510aeca3c355fdb4242376192d5821999af77a6\"" Apr 13 20:34:33.255134 systemd[1]: Started cri-containerd-2e85e2ef7385a929b39a4f2fc510aeca3c355fdb4242376192d5821999af77a6.scope - libcontainer container 2e85e2ef7385a929b39a4f2fc510aeca3c355fdb4242376192d5821999af77a6. 
Apr 13 20:34:33.389274 containerd[1462]: time="2026-04-13T20:34:33.389181417Z" level=info msg="StartContainer for \"2e85e2ef7385a929b39a4f2fc510aeca3c355fdb4242376192d5821999af77a6\" returns successfully" Apr 13 20:34:34.286915 kubelet[2582]: I0413 20:34:34.286816 2582 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 13 20:34:34.286915 kubelet[2582]: I0413 20:34:34.286859 2582 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 13 20:34:34.771523 kubelet[2582]: I0413 20:34:34.769341 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/csi-node-driver-dn72w" podStartSLOduration=30.376623784 podStartE2EDuration="46.769316029s" podCreationTimestamp="2026-04-13 20:33:48 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.73150251 +0000 UTC m=+48.953220437" lastFinishedPulling="2026-04-13 20:34:33.124194746 +0000 UTC m=+65.345912682" observedRunningTime="2026-04-13 20:34:33.996739344 +0000 UTC m=+66.218457277" watchObservedRunningTime="2026-04-13 20:34:34.769316029 +0000 UTC m=+66.991033989" Apr 13 20:34:34.986785 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1660001050.mount: Deactivated successfully. 
Apr 13 20:34:35.013489 containerd[1462]: time="2026-04-13T20:34:35.013414898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:35.015350 containerd[1462]: time="2026-04-13T20:34:35.015094186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Apr 13 20:34:35.017160 containerd[1462]: time="2026-04-13T20:34:35.017110648Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:35.024605 containerd[1462]: time="2026-04-13T20:34:35.022291983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 13 20:34:35.028861 containerd[1462]: time="2026-04-13T20:34:35.028692655Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.904212281s" Apr 13 20:34:35.028861 containerd[1462]: time="2026-04-13T20:34:35.028767916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Apr 13 20:34:35.043992 containerd[1462]: time="2026-04-13T20:34:35.043946090Z" level=info msg="CreateContainer within sandbox \"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Apr 13 20:34:35.069743 
containerd[1462]: time="2026-04-13T20:34:35.069673981Z" level=info msg="CreateContainer within sandbox \"bc27c386484fa50e06601eb72c48e7a9fa09293d4ca77d87d9689c925d4d6bca\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"7a31beb85ccb7a49715c8c6b3de16a419eb2d6b4547c1995cb93325cb8328bc8\"" Apr 13 20:34:35.071533 containerd[1462]: time="2026-04-13T20:34:35.071466420Z" level=info msg="StartContainer for \"7a31beb85ccb7a49715c8c6b3de16a419eb2d6b4547c1995cb93325cb8328bc8\"" Apr 13 20:34:35.132580 systemd[1]: Started cri-containerd-7a31beb85ccb7a49715c8c6b3de16a419eb2d6b4547c1995cb93325cb8328bc8.scope - libcontainer container 7a31beb85ccb7a49715c8c6b3de16a419eb2d6b4547c1995cb93325cb8328bc8. Apr 13 20:34:35.210725 containerd[1462]: time="2026-04-13T20:34:35.210605647Z" level=info msg="StartContainer for \"7a31beb85ccb7a49715c8c6b3de16a419eb2d6b4547c1995cb93325cb8328bc8\" returns successfully" Apr 13 20:34:35.995500 kubelet[2582]: I0413 20:34:35.995203 2582 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="calico-system/whisker-658d45bc66-9ksdp" podStartSLOduration=2.92948757 podStartE2EDuration="20.995182515s" podCreationTimestamp="2026-04-13 20:34:15 +0000 UTC" firstStartedPulling="2026-04-13 20:34:16.966314337 +0000 UTC m=+49.188032261" lastFinishedPulling="2026-04-13 20:34:35.032009293 +0000 UTC m=+67.253727206" observedRunningTime="2026-04-13 20:34:35.994498638 +0000 UTC m=+68.216216577" watchObservedRunningTime="2026-04-13 20:34:35.995182515 +0000 UTC m=+68.216900446" Apr 13 20:34:38.273452 systemd[1]: Started sshd@7-10.128.0.70:22-20.229.252.112:35356.service - OpenSSH per-connection server daemon (20.229.252.112:35356). 
Apr 13 20:34:39.028911 sshd[5597]: Accepted publickey for core from 20.229.252.112 port 35356 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:34:39.031353 sshd[5597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:34:39.039112 systemd-logind[1443]: New session 8 of user core. Apr 13 20:34:39.043442 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 13 20:34:39.663131 sshd[5597]: pam_unix(sshd:session): session closed for user core Apr 13 20:34:39.670454 systemd[1]: sshd@7-10.128.0.70:22-20.229.252.112:35356.service: Deactivated successfully. Apr 13 20:34:39.674834 systemd[1]: session-8.scope: Deactivated successfully. Apr 13 20:34:39.676902 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Apr 13 20:34:39.678722 systemd-logind[1443]: Removed session 8. Apr 13 20:34:44.795358 systemd[1]: Started sshd@8-10.128.0.70:22-20.229.252.112:35368.service - OpenSSH per-connection server daemon (20.229.252.112:35368). Apr 13 20:34:45.488590 sshd[5653]: Accepted publickey for core from 20.229.252.112 port 35368 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:34:45.491978 sshd[5653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:34:45.501719 systemd-logind[1443]: New session 9 of user core. Apr 13 20:34:45.507202 systemd[1]: Started session-9.scope - Session 9 of User core. Apr 13 20:34:46.069325 sshd[5653]: pam_unix(sshd:session): session closed for user core Apr 13 20:34:46.076887 systemd[1]: sshd@8-10.128.0.70:22-20.229.252.112:35368.service: Deactivated successfully. Apr 13 20:34:46.080737 systemd[1]: session-9.scope: Deactivated successfully. Apr 13 20:34:46.084616 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Apr 13 20:34:46.088239 systemd-logind[1443]: Removed session 9. 
Apr 13 20:34:51.205436 systemd[1]: Started sshd@9-10.128.0.70:22-20.229.252.112:40314.service - OpenSSH per-connection server daemon (20.229.252.112:40314). Apr 13 20:34:51.947940 sshd[5677]: Accepted publickey for core from 20.229.252.112 port 40314 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:34:51.950105 sshd[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:34:51.956744 systemd-logind[1443]: New session 10 of user core. Apr 13 20:34:51.964245 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 13 20:34:52.540479 sshd[5677]: pam_unix(sshd:session): session closed for user core Apr 13 20:34:52.547540 systemd[1]: sshd@9-10.128.0.70:22-20.229.252.112:40314.service: Deactivated successfully. Apr 13 20:34:52.551292 systemd[1]: session-10.scope: Deactivated successfully. Apr 13 20:34:52.552776 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Apr 13 20:34:52.555430 systemd-logind[1443]: Removed session 10. Apr 13 20:34:55.724172 systemd[1]: run-containerd-runc-k8s.io-121cb303174b9750126d0ffd52ab18709a20c3a556d0f6a48315772369190318-runc.f4H518.mount: Deactivated successfully. Apr 13 20:34:57.666427 systemd[1]: Started sshd@10-10.128.0.70:22-20.229.252.112:39662.service - OpenSSH per-connection server daemon (20.229.252.112:39662). Apr 13 20:34:58.366592 sshd[5731]: Accepted publickey for core from 20.229.252.112 port 39662 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:34:58.369022 sshd[5731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:34:58.379567 systemd-logind[1443]: New session 11 of user core. Apr 13 20:34:58.384220 systemd[1]: Started session-11.scope - Session 11 of User core. Apr 13 20:34:58.970479 sshd[5731]: pam_unix(sshd:session): session closed for user core Apr 13 20:34:58.977565 systemd[1]: sshd@10-10.128.0.70:22-20.229.252.112:39662.service: Deactivated successfully. 
Apr 13 20:34:58.981689 systemd[1]: session-11.scope: Deactivated successfully. Apr 13 20:34:58.984781 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. Apr 13 20:34:58.987094 systemd-logind[1443]: Removed session 11. Apr 13 20:35:04.101421 systemd[1]: Started sshd@11-10.128.0.70:22-20.229.252.112:39670.service - OpenSSH per-connection server daemon (20.229.252.112:39670). Apr 13 20:35:04.838691 sshd[5793]: Accepted publickey for core from 20.229.252.112 port 39670 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:04.840866 sshd[5793]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:04.848579 systemd-logind[1443]: New session 12 of user core. Apr 13 20:35:04.852308 systemd[1]: Started session-12.scope - Session 12 of User core. Apr 13 20:35:05.439756 sshd[5793]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:05.446664 systemd[1]: sshd@11-10.128.0.70:22-20.229.252.112:39670.service: Deactivated successfully. Apr 13 20:35:05.452267 systemd[1]: session-12.scope: Deactivated successfully. Apr 13 20:35:05.453691 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Apr 13 20:35:05.455644 systemd-logind[1443]: Removed session 12. Apr 13 20:35:05.567342 systemd[1]: Started sshd@12-10.128.0.70:22-20.229.252.112:56794.service - OpenSSH per-connection server daemon (20.229.252.112:56794). Apr 13 20:35:06.252021 sshd[5807]: Accepted publickey for core from 20.229.252.112 port 56794 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:06.254841 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:06.265514 systemd-logind[1443]: New session 13 of user core. Apr 13 20:35:06.270306 systemd[1]: Started session-13.scope - Session 13 of User core. 
Apr 13 20:35:06.898161 sshd[5807]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:06.905362 systemd[1]: sshd@12-10.128.0.70:22-20.229.252.112:56794.service: Deactivated successfully. Apr 13 20:35:06.909636 systemd[1]: session-13.scope: Deactivated successfully. Apr 13 20:35:06.911868 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Apr 13 20:35:06.914833 systemd-logind[1443]: Removed session 13. Apr 13 20:35:07.020465 systemd[1]: Started sshd@13-10.128.0.70:22-20.229.252.112:56796.service - OpenSSH per-connection server daemon (20.229.252.112:56796). Apr 13 20:35:07.717153 sshd[5824]: Accepted publickey for core from 20.229.252.112 port 56796 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:07.719542 sshd[5824]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:07.727820 systemd-logind[1443]: New session 14 of user core. Apr 13 20:35:07.733186 systemd[1]: Started session-14.scope - Session 14 of User core. Apr 13 20:35:08.288721 sshd[5824]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:08.294656 systemd[1]: sshd@13-10.128.0.70:22-20.229.252.112:56796.service: Deactivated successfully. Apr 13 20:35:08.298723 systemd[1]: session-14.scope: Deactivated successfully. Apr 13 20:35:08.301786 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Apr 13 20:35:08.304809 systemd-logind[1443]: Removed session 14. Apr 13 20:35:13.423787 systemd[1]: Started sshd@14-10.128.0.70:22-20.229.252.112:56802.service - OpenSSH per-connection server daemon (20.229.252.112:56802). Apr 13 20:35:14.158973 sshd[5838]: Accepted publickey for core from 20.229.252.112 port 56802 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:14.161387 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:14.168374 systemd-logind[1443]: New session 15 of user core. 
Apr 13 20:35:14.174466 systemd[1]: Started session-15.scope - Session 15 of User core. Apr 13 20:35:14.631997 systemd[1]: run-containerd-runc-k8s.io-a0ad9f0cb5cb89f64334d3bf5f3d08e375dd5600b16b122ab45135289af27c95-runc.3JjKmT.mount: Deactivated successfully. Apr 13 20:35:14.812576 sshd[5838]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:14.820567 systemd[1]: sshd@14-10.128.0.70:22-20.229.252.112:56802.service: Deactivated successfully. Apr 13 20:35:14.824098 systemd[1]: session-15.scope: Deactivated successfully. Apr 13 20:35:14.825426 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Apr 13 20:35:14.828315 systemd-logind[1443]: Removed session 15. Apr 13 20:35:14.941510 systemd[1]: Started sshd@15-10.128.0.70:22-20.229.252.112:56804.service - OpenSSH per-connection server daemon (20.229.252.112:56804). Apr 13 20:35:15.640948 sshd[5872]: Accepted publickey for core from 20.229.252.112 port 56804 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:15.643320 sshd[5872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:15.651671 systemd-logind[1443]: New session 16 of user core. Apr 13 20:35:15.659216 systemd[1]: Started session-16.scope - Session 16 of User core. Apr 13 20:35:16.271994 sshd[5872]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:16.278403 systemd[1]: sshd@15-10.128.0.70:22-20.229.252.112:56804.service: Deactivated successfully. Apr 13 20:35:16.283214 systemd[1]: session-16.scope: Deactivated successfully. Apr 13 20:35:16.284565 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Apr 13 20:35:16.286673 systemd-logind[1443]: Removed session 16. Apr 13 20:35:16.401343 systemd[1]: Started sshd@16-10.128.0.70:22-20.229.252.112:38800.service - OpenSSH per-connection server daemon (20.229.252.112:38800). 
Apr 13 20:35:17.128729 sshd[5883]: Accepted publickey for core from 20.229.252.112 port 38800 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:17.130657 sshd[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:17.139239 systemd-logind[1443]: New session 17 of user core. Apr 13 20:35:17.145226 systemd[1]: Started session-17.scope - Session 17 of User core. Apr 13 20:35:18.433742 sshd[5883]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:18.440365 systemd[1]: sshd@16-10.128.0.70:22-20.229.252.112:38800.service: Deactivated successfully. Apr 13 20:35:18.445125 systemd[1]: session-17.scope: Deactivated successfully. Apr 13 20:35:18.446435 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Apr 13 20:35:18.448269 systemd-logind[1443]: Removed session 17. Apr 13 20:35:18.567597 systemd[1]: Started sshd@17-10.128.0.70:22-20.229.252.112:38814.service - OpenSSH per-connection server daemon (20.229.252.112:38814). Apr 13 20:35:19.287564 sshd[5907]: Accepted publickey for core from 20.229.252.112 port 38814 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:19.290194 sshd[5907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:19.298034 systemd-logind[1443]: New session 18 of user core. Apr 13 20:35:19.302178 systemd[1]: Started session-18.scope - Session 18 of User core. Apr 13 20:35:20.040525 sshd[5907]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:20.047651 systemd[1]: sshd@17-10.128.0.70:22-20.229.252.112:38814.service: Deactivated successfully. Apr 13 20:35:20.050625 systemd[1]: session-18.scope: Deactivated successfully. Apr 13 20:35:20.053652 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. Apr 13 20:35:20.055551 systemd-logind[1443]: Removed session 18. 
Apr 13 20:35:20.171387 systemd[1]: Started sshd@18-10.128.0.70:22-20.229.252.112:38824.service - OpenSSH per-connection server daemon (20.229.252.112:38824). Apr 13 20:35:20.888109 sshd[5920]: Accepted publickey for core from 20.229.252.112 port 38824 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:20.890575 sshd[5920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:20.897515 systemd-logind[1443]: New session 19 of user core. Apr 13 20:35:20.904234 systemd[1]: Started session-19.scope - Session 19 of User core. Apr 13 20:35:21.485566 sshd[5920]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:21.491147 systemd[1]: sshd@18-10.128.0.70:22-20.229.252.112:38824.service: Deactivated successfully. Apr 13 20:35:21.495651 systemd[1]: session-19.scope: Deactivated successfully. Apr 13 20:35:21.498358 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Apr 13 20:35:21.501183 systemd-logind[1443]: Removed session 19. Apr 13 20:35:26.612974 systemd[1]: Started sshd@19-10.128.0.70:22-20.229.252.112:60968.service - OpenSSH per-connection server daemon (20.229.252.112:60968). Apr 13 20:35:27.342230 sshd[5977]: Accepted publickey for core from 20.229.252.112 port 60968 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:27.344577 sshd[5977]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:27.352647 systemd-logind[1443]: New session 20 of user core. Apr 13 20:35:27.356194 systemd[1]: Started session-20.scope - Session 20 of User core. Apr 13 20:35:27.931704 sshd[5977]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:27.938217 systemd[1]: sshd@19-10.128.0.70:22-20.229.252.112:60968.service: Deactivated successfully. Apr 13 20:35:27.941566 systemd[1]: session-20.scope: Deactivated successfully. Apr 13 20:35:27.943239 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. 
Apr 13 20:35:27.945485 systemd-logind[1443]: Removed session 20. Apr 13 20:35:33.066381 systemd[1]: Started sshd@20-10.128.0.70:22-20.229.252.112:60974.service - OpenSSH per-connection server daemon (20.229.252.112:60974). Apr 13 20:35:33.790106 sshd[6013]: Accepted publickey for core from 20.229.252.112 port 60974 ssh2: RSA SHA256:8koyYCh6N7XC15x1L0GA+V5R/sIeJxt7qqNvWavQGuY Apr 13 20:35:33.793077 sshd[6013]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 13 20:35:33.799847 systemd-logind[1443]: New session 21 of user core. Apr 13 20:35:33.805193 systemd[1]: Started session-21.scope - Session 21 of User core. Apr 13 20:35:34.439541 sshd[6013]: pam_unix(sshd:session): session closed for user core Apr 13 20:35:34.449765 systemd[1]: sshd@20-10.128.0.70:22-20.229.252.112:60974.service: Deactivated successfully. Apr 13 20:35:34.455756 systemd[1]: session-21.scope: Deactivated successfully. Apr 13 20:35:34.460479 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Apr 13 20:35:34.462849 systemd-logind[1443]: Removed session 21.