Mar 13 00:30:38.117360 kernel: Linux version 6.12.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT_DYNAMIC Thu Mar 12 22:08:29 -00 2026 Mar 13 00:30:38.117400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:30:38.117422 kernel: BIOS-provided physical RAM map: Mar 13 00:30:38.117435 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Mar 13 00:30:38.117447 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Mar 13 00:30:38.117459 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Mar 13 00:30:38.117481 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Mar 13 00:30:38.117558 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Mar 13 00:30:38.117571 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd2e4fff] usable Mar 13 00:30:38.117590 kernel: BIOS-e820: [mem 0x00000000bd2e5000-0x00000000bd2eefff] ACPI data Mar 13 00:30:38.117603 kernel: BIOS-e820: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] usable Mar 13 00:30:38.117616 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Mar 13 00:30:38.117630 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Mar 13 00:30:38.117645 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Mar 13 00:30:38.117664 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Mar 13 00:30:38.117684 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Mar 13 00:30:38.117700 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable 
Mar 13 00:30:38.117716 kernel: NX (Execute Disable) protection: active Mar 13 00:30:38.117732 kernel: APIC: Static calls initialized Mar 13 00:30:38.117748 kernel: efi: EFI v2.7 by EDK II Mar 13 00:30:38.117765 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbd2ef018 RNG=0xbfb73018 TPMEventLog=0xbd2e5018 Mar 13 00:30:38.117781 kernel: random: crng init done Mar 13 00:30:38.117796 kernel: secureboot: Secure boot disabled Mar 13 00:30:38.117812 kernel: SMBIOS 2.4 present. Mar 13 00:30:38.117837 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026 Mar 13 00:30:38.117857 kernel: DMI: Memory slots populated: 1/1 Mar 13 00:30:38.117872 kernel: Hypervisor detected: KVM Mar 13 00:30:38.117888 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Mar 13 00:30:38.117904 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 13 00:30:38.117920 kernel: kvm-clock: using sched offset of 15418121296 cycles Mar 13 00:30:38.117937 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 13 00:30:38.117954 kernel: tsc: Detected 2299.998 MHz processor Mar 13 00:30:38.117971 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 13 00:30:38.117987 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 13 00:30:38.118002 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Mar 13 00:30:38.118022 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Mar 13 00:30:38.118038 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 13 00:30:38.118054 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Mar 13 00:30:38.118069 kernel: Using GB pages for direct mapping Mar 13 00:30:38.118085 kernel: ACPI: Early table checksum verification disabled Mar 13 00:30:38.118107 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Mar 13 00:30:38.118123 kernel: ACPI: XSDT 
0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Mar 13 00:30:38.118143 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Mar 13 00:30:38.118159 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Mar 13 00:30:38.118176 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Mar 13 00:30:38.118192 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20250404) Mar 13 00:30:38.118209 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Mar 13 00:30:38.118225 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Mar 13 00:30:38.118242 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Mar 13 00:30:38.118262 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Mar 13 00:30:38.118278 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Mar 13 00:30:38.118294 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Mar 13 00:30:38.118311 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Mar 13 00:30:38.118327 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Mar 13 00:30:38.118343 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Mar 13 00:30:38.118360 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Mar 13 00:30:38.118376 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Mar 13 00:30:38.118392 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Mar 13 00:30:38.118412 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Mar 13 00:30:38.118428 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Mar 13 00:30:38.118444 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Mar 13 00:30:38.118461 
kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Mar 13 00:30:38.118477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Mar 13 00:30:38.118494 kernel: NUMA: Node 0 [mem 0x00001000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00001000-0xbfffffff] Mar 13 00:30:38.118562 kernel: NUMA: Node 0 [mem 0x00001000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00001000-0x21fffffff] Mar 13 00:30:38.118580 kernel: NODE_DATA(0) allocated [mem 0x21fff8dc0-0x21fffffff] Mar 13 00:30:38.118596 kernel: Zone ranges: Mar 13 00:30:38.118618 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 13 00:30:38.118635 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 13 00:30:38.118652 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Mar 13 00:30:38.118669 kernel: Device empty Mar 13 00:30:38.118687 kernel: Movable zone start for each node Mar 13 00:30:38.118704 kernel: Early memory node ranges Mar 13 00:30:38.118723 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Mar 13 00:30:38.118741 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Mar 13 00:30:38.118759 kernel: node 0: [mem 0x0000000000100000-0x00000000bd2e4fff] Mar 13 00:30:38.118777 kernel: node 0: [mem 0x00000000bd2ef000-0x00000000bf8ecfff] Mar 13 00:30:38.118799 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Mar 13 00:30:38.118826 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Mar 13 00:30:38.118844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Mar 13 00:30:38.118862 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 13 00:30:38.118880 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Mar 13 00:30:38.118899 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Mar 13 00:30:38.118917 kernel: On node 0, zone DMA32: 10 pages in unavailable ranges Mar 13 00:30:38.118935 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 13 00:30:38.118953 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Mar 13 00:30:38.118975 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 13 00:30:38.118993 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 13 00:30:38.119011 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 13 00:30:38.119029 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 13 00:30:38.119046 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 13 00:30:38.119065 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 13 00:30:38.119082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 13 00:30:38.119100 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 13 00:30:38.119118 kernel: CPU topo: Max. logical packages: 1 Mar 13 00:30:38.119140 kernel: CPU topo: Max. logical dies: 1 Mar 13 00:30:38.119158 kernel: CPU topo: Max. dies per package: 1 Mar 13 00:30:38.119174 kernel: CPU topo: Max. threads per core: 2 Mar 13 00:30:38.119191 kernel: CPU topo: Num. cores per package: 1 Mar 13 00:30:38.119209 kernel: CPU topo: Num. 
threads per package: 2 Mar 13 00:30:38.119227 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Mar 13 00:30:38.119244 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 13 00:30:38.119262 kernel: Booting paravirtualized kernel on KVM Mar 13 00:30:38.119281 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 13 00:30:38.119302 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 13 00:30:38.119320 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Mar 13 00:30:38.119338 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Mar 13 00:30:38.119355 kernel: pcpu-alloc: [0] 0 1 Mar 13 00:30:38.119372 kernel: kvm-guest: PV spinlocks enabled Mar 13 00:30:38.119390 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 13 00:30:38.119409 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d Mar 13 00:30:38.119427 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 13 00:30:38.119448 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 13 00:30:38.119465 kernel: Fallback order for Node 0: 0 Mar 13 00:30:38.119483 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1965136 Mar 13 00:30:38.119593 kernel: Policy zone: Normal Mar 13 00:30:38.119613 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 13 00:30:38.119632 kernel: software IO TLB: area num 2. 
Mar 13 00:30:38.119664 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 13 00:30:38.119686 kernel: Kernel/User page tables isolation: enabled Mar 13 00:30:38.119705 kernel: ftrace: allocating 40099 entries in 157 pages Mar 13 00:30:38.119723 kernel: ftrace: allocated 157 pages with 5 groups Mar 13 00:30:38.119742 kernel: Dynamic Preempt: voluntary Mar 13 00:30:38.119760 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 13 00:30:38.119784 kernel: rcu: RCU event tracing is enabled. Mar 13 00:30:38.119803 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 13 00:30:38.119830 kernel: Trampoline variant of Tasks RCU enabled. Mar 13 00:30:38.119849 kernel: Rude variant of Tasks RCU enabled. Mar 13 00:30:38.119868 kernel: Tracing variant of Tasks RCU enabled. Mar 13 00:30:38.119891 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 13 00:30:38.119910 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 13 00:30:38.119929 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:30:38.119948 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:30:38.119967 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 13 00:30:38.119985 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 13 00:30:38.120004 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 13 00:30:38.120023 kernel: Console: colour dummy device 80x25 Mar 13 00:30:38.120042 kernel: printk: legacy console [ttyS0] enabled Mar 13 00:30:38.120064 kernel: ACPI: Core revision 20240827 Mar 13 00:30:38.120082 kernel: APIC: Switch to symmetric I/O mode setup Mar 13 00:30:38.120100 kernel: x2apic enabled Mar 13 00:30:38.120119 kernel: APIC: Switched APIC routing to: physical x2apic Mar 13 00:30:38.120138 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Mar 13 00:30:38.120158 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Mar 13 00:30:38.120177 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Mar 13 00:30:38.120196 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Mar 13 00:30:38.120214 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Mar 13 00:30:38.120237 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 13 00:30:38.120256 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall and VM exit Mar 13 00:30:38.120275 kernel: Spectre V2 : Mitigation: IBRS Mar 13 00:30:38.120294 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Mar 13 00:30:38.120312 kernel: RETBleed: Mitigation: IBRS Mar 13 00:30:38.120331 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 13 00:30:38.120350 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Mar 13 00:30:38.120369 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 13 00:30:38.120396 kernel: MDS: Mitigation: Clear CPU buffers Mar 13 00:30:38.120418 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 13 00:30:38.120437 kernel: active return thunk: its_return_thunk Mar 13 00:30:38.120456 kernel: ITS: Mitigation: Aligned branch/return thunks Mar 13 00:30:38.120474 
kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 13 00:30:38.120492 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 13 00:30:38.120532 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 13 00:30:38.120550 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 13 00:30:38.120567 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 13 00:30:38.120589 kernel: Freeing SMP alternatives memory: 32K Mar 13 00:30:38.120606 kernel: pid_max: default: 32768 minimum: 301 Mar 13 00:30:38.120624 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Mar 13 00:30:38.120641 kernel: landlock: Up and running. Mar 13 00:30:38.120659 kernel: SELinux: Initializing. Mar 13 00:30:38.120682 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 13 00:30:38.120699 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 13 00:30:38.120717 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Mar 13 00:30:38.120735 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Mar 13 00:30:38.120756 kernel: signal: max sigframe size: 1776 Mar 13 00:30:38.120773 kernel: rcu: Hierarchical SRCU implementation. Mar 13 00:30:38.120791 kernel: rcu: Max phase no-delay instances is 400. Mar 13 00:30:38.120809 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Mar 13 00:30:38.120832 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 13 00:30:38.120849 kernel: smp: Bringing up secondary CPUs ... Mar 13 00:30:38.120866 kernel: smpboot: x86: Booting SMP configuration: Mar 13 00:30:38.120882 kernel: .... node #0, CPUs: #1 Mar 13 00:30:38.120910 kernel: MDS CPU bug present and SMT on, data leak possible. 
See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 13 00:30:38.120935 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 13 00:30:38.120953 kernel: smp: Brought up 1 node, 2 CPUs Mar 13 00:30:38.120973 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Mar 13 00:30:38.120990 kernel: Memory: 7555808K/7860544K available (14336K kernel code, 2445K rwdata, 26064K rodata, 46200K init, 2560K bss, 298904K reserved, 0K cma-reserved) Mar 13 00:30:38.121008 kernel: devtmpfs: initialized Mar 13 00:30:38.121025 kernel: x86/mm: Memory block size: 128MB Mar 13 00:30:38.121043 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Mar 13 00:30:38.121062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 13 00:30:38.121080 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 13 00:30:38.121103 kernel: pinctrl core: initialized pinctrl subsystem Mar 13 00:30:38.121122 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 13 00:30:38.121140 kernel: audit: initializing netlink subsys (disabled) Mar 13 00:30:38.121159 kernel: audit: type=2000 audit(1773361833.872:1): state=initialized audit_enabled=0 res=1 Mar 13 00:30:38.121177 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 13 00:30:38.121195 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 13 00:30:38.121214 kernel: cpuidle: using governor menu Mar 13 00:30:38.121232 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 13 00:30:38.121250 kernel: dca service started, version 1.12.1 Mar 13 00:30:38.121272 kernel: PCI: Using configuration type 1 for base access Mar 13 00:30:38.121291 kernel: kprobes: kprobe jump-optimization is enabled. 
All kprobes are optimized if possible. Mar 13 00:30:38.121309 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 13 00:30:38.121328 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 13 00:30:38.121346 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 13 00:30:38.121365 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 13 00:30:38.121382 kernel: ACPI: Added _OSI(Module Device) Mar 13 00:30:38.121399 kernel: ACPI: Added _OSI(Processor Device) Mar 13 00:30:38.121416 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 13 00:30:38.121438 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 13 00:30:38.121456 kernel: ACPI: Interpreter enabled Mar 13 00:30:38.121474 kernel: ACPI: PM: (supports S0 S3 S5) Mar 13 00:30:38.121493 kernel: ACPI: Using IOAPIC for interrupt routing Mar 13 00:30:38.121531 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 13 00:30:38.121555 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 13 00:30:38.121573 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Mar 13 00:30:38.121592 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 13 00:30:38.121856 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 13 00:30:38.122052 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 13 00:30:38.122242 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 13 00:30:38.122266 kernel: PCI host bridge to bus 0000:00 Mar 13 00:30:38.122439 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 13 00:30:38.122639 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 13 00:30:38.122806 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 13 00:30:38.122981 
kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Mar 13 00:30:38.123145 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 13 00:30:38.123358 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint Mar 13 00:30:38.125635 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 conventional PCI endpoint Mar 13 00:30:38.125873 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint Mar 13 00:30:38.126072 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 13 00:30:38.126286 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 conventional PCI endpoint Mar 13 00:30:38.126471 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Mar 13 00:30:38.126699 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc0001000-0xc000107f] Mar 13 00:30:38.126909 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Mar 13 00:30:38.127099 kernel: pci 0000:00:04.0: BAR 0 [io 0xc000-0xc03f] Mar 13 00:30:38.127287 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc0000000-0xc000007f] Mar 13 00:30:38.127484 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Mar 13 00:30:38.128183 kernel: pci 0000:00:05.0: BAR 0 [io 0xc080-0xc09f] Mar 13 00:30:38.128392 kernel: pci 0000:00:05.0: BAR 1 [mem 0xc0002000-0xc000203f] Mar 13 00:30:38.128420 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 13 00:30:38.128440 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 13 00:30:38.128459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 13 00:30:38.128478 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 13 00:30:38.128496 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 13 00:30:38.129576 kernel: iommu: Default domain type: Translated Mar 13 00:30:38.129597 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 13 00:30:38.129617 kernel: efivars: 
Registered efivars operations Mar 13 00:30:38.129635 kernel: PCI: Using ACPI for IRQ routing Mar 13 00:30:38.129654 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 13 00:30:38.129673 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Mar 13 00:30:38.129691 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Mar 13 00:30:38.129710 kernel: e820: reserve RAM buffer [mem 0xbd2e5000-0xbfffffff] Mar 13 00:30:38.129728 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Mar 13 00:30:38.129749 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Mar 13 00:30:38.129765 kernel: vgaarb: loaded Mar 13 00:30:38.129783 kernel: clocksource: Switched to clocksource kvm-clock Mar 13 00:30:38.129801 kernel: VFS: Disk quotas dquot_6.6.0 Mar 13 00:30:38.129830 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 13 00:30:38.129849 kernel: pnp: PnP ACPI init Mar 13 00:30:38.129867 kernel: pnp: PnP ACPI: found 7 devices Mar 13 00:30:38.129886 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 13 00:30:38.129904 kernel: NET: Registered PF_INET protocol family Mar 13 00:30:38.129923 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 13 00:30:38.129946 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 13 00:30:38.129965 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 13 00:30:38.129984 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Mar 13 00:30:38.130003 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 13 00:30:38.130022 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 13 00:30:38.130041 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 13 00:30:38.130060 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 13 00:30:38.130078 kernel: 
NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 13 00:30:38.130101 kernel: NET: Registered PF_XDP protocol family Mar 13 00:30:38.130296 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 13 00:30:38.130465 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 13 00:30:38.130677 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 13 00:30:38.130851 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Mar 13 00:30:38.131042 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 13 00:30:38.131067 kernel: PCI: CLS 0 bytes, default 64 Mar 13 00:30:38.131091 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 13 00:30:38.131111 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Mar 13 00:30:38.131134 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 13 00:30:38.131154 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Mar 13 00:30:38.131173 kernel: clocksource: Switched to clocksource tsc Mar 13 00:30:38.131192 kernel: Initialise system trusted keyrings Mar 13 00:30:38.131210 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 13 00:30:38.131229 kernel: Key type asymmetric registered Mar 13 00:30:38.131247 kernel: Asymmetric key parser 'x509' registered Mar 13 00:30:38.131269 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Mar 13 00:30:38.131288 kernel: io scheduler mq-deadline registered Mar 13 00:30:38.131307 kernel: io scheduler kyber registered Mar 13 00:30:38.131325 kernel: io scheduler bfq registered Mar 13 00:30:38.131344 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 13 00:30:38.131363 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 13 00:30:38.132594 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Mar 13 00:30:38.132626 kernel: ACPI: \_SB_.LNKD: 
Enabled at IRQ 10 Mar 13 00:30:38.132842 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Mar 13 00:30:38.132873 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 13 00:30:38.133063 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Mar 13 00:30:38.133088 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 13 00:30:38.133107 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 13 00:30:38.133126 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 13 00:30:38.133143 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Mar 13 00:30:38.133160 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Mar 13 00:30:38.133367 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Mar 13 00:30:38.133400 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 13 00:30:38.133419 kernel: i8042: Warning: Keylock active Mar 13 00:30:38.133437 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 13 00:30:38.133456 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 13 00:30:38.133686 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 13 00:30:38.133877 kernel: rtc_cmos 00:00: registered as rtc0 Mar 13 00:30:38.134055 kernel: rtc_cmos 00:00: setting system clock to 2026-03-13T00:30:37 UTC (1773361837) Mar 13 00:30:38.134225 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 13 00:30:38.134254 kernel: intel_pstate: CPU model not supported Mar 13 00:30:38.134273 kernel: pstore: Using crash dump compression: deflate Mar 13 00:30:38.134291 kernel: pstore: Registered efi_pstore as persistent store backend Mar 13 00:30:38.134310 kernel: NET: Registered PF_INET6 protocol family Mar 13 00:30:38.134328 kernel: Segment Routing with IPv6 Mar 13 00:30:38.134346 kernel: In-situ OAM (IOAM) with IPv6 Mar 13 00:30:38.134364 kernel: NET: Registered PF_PACKET protocol family Mar 13 
00:30:38.134382 kernel: Key type dns_resolver registered Mar 13 00:30:38.134400 kernel: IPI shorthand broadcast: enabled Mar 13 00:30:38.134422 kernel: sched_clock: Marking stable (3885003989, 132674506)->(4073672458, -55993963) Mar 13 00:30:38.134440 kernel: registered taskstats version 1 Mar 13 00:30:38.134458 kernel: Loading compiled-in X.509 certificates Mar 13 00:30:38.134477 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.74-flatcar: 5aff49df330f42445474818d085d5033fee752d8' Mar 13 00:30:38.134494 kernel: Demotion targets for Node 0: null Mar 13 00:30:38.134531 kernel: Key type .fscrypt registered Mar 13 00:30:38.134549 kernel: Key type fscrypt-provisioning registered Mar 13 00:30:38.134566 kernel: ima: Allocated hash algorithm: sha1 Mar 13 00:30:38.134586 kernel: ima: No architecture policies found Mar 13 00:30:38.134607 kernel: clk: Disabling unused clocks Mar 13 00:30:38.134626 kernel: Warning: unable to open an initial console. Mar 13 00:30:38.134645 kernel: Freeing unused kernel image (initmem) memory: 46200K Mar 13 00:30:38.134663 kernel: Write protecting the kernel read-only data: 40960k Mar 13 00:30:38.134682 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Mar 13 00:30:38.134701 kernel: Freeing unused kernel image (rodata/data gap) memory: 560K Mar 13 00:30:38.134720 kernel: Run /init as init process Mar 13 00:30:38.134739 kernel: with arguments: Mar 13 00:30:38.134757 kernel: /init Mar 13 00:30:38.134779 kernel: with environment: Mar 13 00:30:38.134797 kernel: HOME=/ Mar 13 00:30:38.134823 kernel: TERM=linux Mar 13 00:30:38.134844 systemd[1]: Successfully made /usr/ read-only. 
Mar 13 00:30:38.134868 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:30:38.134889 systemd[1]: Detected virtualization google. Mar 13 00:30:38.134908 systemd[1]: Detected architecture x86-64. Mar 13 00:30:38.134931 systemd[1]: Running in initrd. Mar 13 00:30:38.134950 systemd[1]: No hostname configured, using default hostname. Mar 13 00:30:38.134970 systemd[1]: Hostname set to . Mar 13 00:30:38.134990 systemd[1]: Initializing machine ID from random generator. Mar 13 00:30:38.135010 systemd[1]: Queued start job for default target initrd.target. Mar 13 00:30:38.135030 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:30:38.135068 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:30:38.135094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 13 00:30:38.135115 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:30:38.135136 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 13 00:30:38.135158 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 13 00:30:38.135180 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 13 00:30:38.135201 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Mar 13 00:30:38.135226 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:30:38.135246 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:30:38.135267 systemd[1]: Reached target paths.target - Path Units. Mar 13 00:30:38.135288 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:30:38.135308 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:30:38.135328 systemd[1]: Reached target timers.target - Timer Units. Mar 13 00:30:38.135348 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:30:38.135370 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:30:38.135394 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 13 00:30:38.135415 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 13 00:30:38.135435 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:30:38.135456 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:30:38.135477 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:30:38.136885 systemd[1]: Reached target sockets.target - Socket Units. Mar 13 00:30:38.136916 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 13 00:30:38.136938 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:30:38.136959 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 13 00:30:38.136986 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Mar 13 00:30:38.137006 systemd[1]: Starting systemd-fsck-usr.service... Mar 13 00:30:38.137026 systemd[1]: Starting systemd-journald.service - Journal Service... 
Mar 13 00:30:38.137047 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 13 00:30:38.137066 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:30:38.137087 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 13 00:30:38.137112 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 13 00:30:38.137133 systemd[1]: Finished systemd-fsck-usr.service.
Mar 13 00:30:38.137154 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 13 00:30:38.137210 systemd-journald[192]: Collecting audit messages is disabled.
Mar 13 00:30:38.137255 systemd-journald[192]: Journal started
Mar 13 00:30:38.137299 systemd-journald[192]: Runtime Journal (/run/log/journal/269ee1c376ef4b3daad71f6395692eef) is 8M, max 148.6M, 140.6M free.
Mar 13 00:30:38.113571 systemd-modules-load[193]: Inserted module 'overlay'
Mar 13 00:30:38.140801 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 13 00:30:38.144120 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:30:38.150935 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 13 00:30:38.159679 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 13 00:30:38.161694 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 13 00:30:38.168484 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 13 00:30:38.176528 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 13 00:30:38.180380 systemd-modules-load[193]: Inserted module 'br_netfilter'
Mar 13 00:30:38.181865 kernel: Bridge firewalling registered
Mar 13 00:30:38.183000 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 13 00:30:38.191215 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 13 00:30:38.198480 systemd-tmpfiles[210]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Mar 13 00:30:38.206484 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 13 00:30:38.214880 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 13 00:30:38.218733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 13 00:30:38.225811 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 13 00:30:38.230557 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 13 00:30:38.247662 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 13 00:30:38.267520 dracut-cmdline[229]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=a2116dc4421f78fe124deb19b9ad6d70a0cb4fc0b3349854f4ce4e2904d4925d
Mar 13 00:30:38.320643 systemd-resolved[230]: Positive Trust Anchors:
Mar 13 00:30:38.321169 systemd-resolved[230]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:30:38.321383 systemd-resolved[230]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:30:38.329701 systemd-resolved[230]: Defaulting to hostname 'linux'.
Mar 13 00:30:38.334418 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:30:38.341711 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:30:38.389543 kernel: SCSI subsystem initialized
Mar 13 00:30:38.401522 kernel: Loading iSCSI transport class v2.0-870.
Mar 13 00:30:38.413533 kernel: iscsi: registered transport (tcp)
Mar 13 00:30:38.438539 kernel: iscsi: registered transport (qla4xxx)
Mar 13 00:30:38.438601 kernel: QLogic iSCSI HBA Driver
Mar 13 00:30:38.461468 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 13 00:30:38.478347 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 13 00:30:38.485293 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 13 00:30:38.544732 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 13 00:30:38.553411 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 13 00:30:38.618536 kernel: raid6: avx2x4 gen() 17986 MB/s
Mar 13 00:30:38.635534 kernel: raid6: avx2x2 gen() 17957 MB/s
Mar 13 00:30:38.652920 kernel: raid6: avx2x1 gen() 13913 MB/s
Mar 13 00:30:38.652983 kernel: raid6: using algorithm avx2x4 gen() 17986 MB/s
Mar 13 00:30:38.670920 kernel: raid6: .... xor() 7831 MB/s, rmw enabled
Mar 13 00:30:38.671000 kernel: raid6: using avx2x2 recovery algorithm
Mar 13 00:30:38.693538 kernel: xor: automatically using best checksumming function avx
Mar 13 00:30:38.875543 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 13 00:30:38.884187 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 13 00:30:38.886632 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 13 00:30:38.920495 systemd-udevd[439]: Using default interface naming scheme 'v255'.
Mar 13 00:30:38.929607 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 13 00:30:38.934138 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 13 00:30:38.962021 dracut-pre-trigger[449]: rd.md=0: removing MD RAID activation
Mar 13 00:30:38.994377 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 13 00:30:38.999728 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 13 00:30:39.089142 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 13 00:30:39.097064 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 13 00:30:39.197642 kernel: virtio_scsi virtio0: 1/0/0 default/read/poll queues
Mar 13 00:30:39.206957 kernel: scsi host0: Virtio SCSI HBA
Mar 13 00:30:39.207063 kernel: blk-mq: reduced tag depth to 10240
Mar 13 00:30:39.213525 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 13 00:30:39.224538 kernel: cryptd: max_cpu_qlen set to 1000
Mar 13 00:30:39.252527 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Mar 13 00:30:39.259537 kernel: AES CTR mode by8 optimization enabled
Mar 13 00:30:39.289560 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 13 00:30:39.289770 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:30:39.295776 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:30:39.301343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:30:39.303371 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 13 00:30:39.344946 kernel: sd 0:0:1:0: [sda] 33554432 512-byte logical blocks: (17.2 GB/16.0 GiB)
Mar 13 00:30:39.345271 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 13 00:30:39.349568 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 13 00:30:39.349880 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 13 00:30:39.352524 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 13 00:30:39.368949 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 13 00:30:39.369014 kernel: GPT:17805311 != 33554431
Mar 13 00:30:39.369039 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 13 00:30:39.369070 kernel: GPT:17805311 != 33554431
Mar 13 00:30:39.369093 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 13 00:30:39.370566 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:30:39.370619 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 13 00:30:39.372925 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:30:39.450843 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 13 00:30:39.463895 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 13 00:30:39.489785 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 13 00:30:39.502563 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 13 00:30:39.513320 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 13 00:30:39.513590 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 13 00:30:39.520659 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:30:39.525595 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:30:39.529597 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 13 00:30:39.535077 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 13 00:30:39.549681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 13 00:30:39.559774 disk-uuid[596]: Primary Header is updated.
Mar 13 00:30:39.559774 disk-uuid[596]: Secondary Entries is updated.
Mar 13 00:30:39.559774 disk-uuid[596]: Secondary Header is updated.
Mar 13 00:30:39.572621 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:30:39.581368 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:30:39.590548 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:30:40.605551 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 13 00:30:40.605775 disk-uuid[597]: The operation has completed successfully.
Mar 13 00:30:40.689635 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 13 00:30:40.689791 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 13 00:30:40.731093 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 13 00:30:40.765896 sh[618]: Success
Mar 13 00:30:40.786732 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 13 00:30:40.786799 kernel: device-mapper: uevent: version 1.0.3
Mar 13 00:30:40.786834 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Mar 13 00:30:40.799566 kernel: device-mapper: verity: sha256 using shash "sha256-avx2"
Mar 13 00:30:40.867038 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 13 00:30:40.872667 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 13 00:30:40.896868 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 13 00:30:40.913532 kernel: BTRFS: device fsid 503642f8-c59c-4168-97a8-9c3603183fa3 devid 1 transid 37 /dev/mapper/usr (254:0) scanned by mount (630)
Mar 13 00:30:40.916053 kernel: BTRFS info (device dm-0): first mount of filesystem 503642f8-c59c-4168-97a8-9c3603183fa3
Mar 13 00:30:40.916106 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:30:40.941823 kernel: BTRFS info (device dm-0 state E): enabling ssd optimizations
Mar 13 00:30:40.941906 kernel: BTRFS info (device dm-0 state E): disabling log replay at mount time
Mar 13 00:30:40.941932 kernel: BTRFS info (device dm-0 state E): enabling free space tree
Mar 13 00:30:40.946710 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 13 00:30:40.948106 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:30:40.951011 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 13 00:30:40.953204 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 13 00:30:40.962691 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 13 00:30:41.005554 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (664)
Mar 13 00:30:41.008162 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:30:41.008234 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:30:41.015870 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:30:41.015962 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:30:41.015987 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:30:41.022572 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:30:41.023891 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 13 00:30:41.033923 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 13 00:30:41.185195 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 13 00:30:41.199719 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 13 00:30:41.266421 ignition[724]: Ignition 2.22.0
Mar 13 00:30:41.266446 ignition[724]: Stage: fetch-offline
Mar 13 00:30:41.270128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 13 00:30:41.266531 ignition[724]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:41.266548 ignition[724]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:41.266701 ignition[724]: parsed url from cmdline: ""
Mar 13 00:30:41.266707 ignition[724]: no config URL provided
Mar 13 00:30:41.266716 ignition[724]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:30:41.285976 systemd-networkd[803]: lo: Link UP
Mar 13 00:30:41.266729 ignition[724]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:30:41.285981 systemd-networkd[803]: lo: Gained carrier
Mar 13 00:30:41.266740 ignition[724]: failed to fetch config: resource requires networking
Mar 13 00:30:41.288433 systemd-networkd[803]: Enumeration completed
Mar 13 00:30:41.266966 ignition[724]: Ignition finished successfully
Mar 13 00:30:41.288896 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:30:41.288991 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:30:41.288998 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:30:41.290810 systemd-networkd[803]: eth0: Link UP
Mar 13 00:30:41.291689 systemd-networkd[803]: eth0: Gained carrier
Mar 13 00:30:41.291707 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:30:41.297007 systemd[1]: Reached target network.target - Network.
Mar 13 00:30:41.305885 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 13 00:30:41.306891 systemd-networkd[803]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf'
Mar 13 00:30:41.306911 systemd-networkd[803]: eth0: DHCPv4 address 10.128.0.72/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 13 00:30:41.358802 ignition[808]: Ignition 2.22.0
Mar 13 00:30:41.358822 ignition[808]: Stage: fetch
Mar 13 00:30:41.359072 ignition[808]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:41.359092 ignition[808]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:41.359249 ignition[808]: parsed url from cmdline: ""
Mar 13 00:30:41.371791 unknown[808]: fetched base config from "system"
Mar 13 00:30:41.359258 ignition[808]: no config URL provided
Mar 13 00:30:41.371804 unknown[808]: fetched base config from "system"
Mar 13 00:30:41.359270 ignition[808]: reading system config file "/usr/lib/ignition/user.ign"
Mar 13 00:30:41.371815 unknown[808]: fetched user config from "gcp"
Mar 13 00:30:41.359287 ignition[808]: no config at "/usr/lib/ignition/user.ign"
Mar 13 00:30:41.375845 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 13 00:30:41.359330 ignition[808]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 13 00:30:41.380162 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 13 00:30:41.363627 ignition[808]: GET result: OK
Mar 13 00:30:41.363734 ignition[808]: parsing config with SHA512: 20bdb9ba138bf8bc11b4f362561bdd3f3f5249726ef5c2c932aeaefdb1095432ffb1cf6035a53e44049665f735172ccfd2ee312eb7d2ef2f34867a8e0ed7c83e
Mar 13 00:30:41.372396 ignition[808]: fetch: fetch complete
Mar 13 00:30:41.372407 ignition[808]: fetch: fetch passed
Mar 13 00:30:41.372480 ignition[808]: Ignition finished successfully
Mar 13 00:30:41.431125 ignition[816]: Ignition 2.22.0
Mar 13 00:30:41.431142 ignition[816]: Stage: kargs
Mar 13 00:30:41.434964 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 13 00:30:41.431370 ignition[816]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:41.437803 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 13 00:30:41.431388 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:41.432469 ignition[816]: kargs: kargs passed
Mar 13 00:30:41.432550 ignition[816]: Ignition finished successfully
Mar 13 00:30:41.476076 ignition[823]: Ignition 2.22.0
Mar 13 00:30:41.476093 ignition[823]: Stage: disks
Mar 13 00:30:41.479561 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 13 00:30:41.476312 ignition[823]: no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:41.482945 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 13 00:30:41.476330 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:41.485828 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 13 00:30:41.477583 ignition[823]: disks: disks passed
Mar 13 00:30:41.489787 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 13 00:30:41.477649 ignition[823]: Ignition finished successfully
Mar 13 00:30:41.493808 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:30:41.497821 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:30:41.504133 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 13 00:30:41.540158 systemd-fsck[832]: ROOT: clean, 15/1628000 files, 120826/1617920 blocks
Mar 13 00:30:41.549901 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 13 00:30:41.555828 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 13 00:30:41.721531 kernel: EXT4-fs (sda9): mounted filesystem 26348f72-0225-4c06-aedc-823e61beebc6 r/w with ordered data mode. Quota mode: none.
Mar 13 00:30:41.722700 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 13 00:30:41.726263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 13 00:30:41.733343 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:30:41.737227 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 13 00:30:41.743217 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 13 00:30:41.743299 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 13 00:30:41.743479 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:30:41.760375 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 13 00:30:41.764546 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 13 00:30:41.770562 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (840)
Mar 13 00:30:41.772989 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:30:41.773048 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:30:41.778219 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:30:41.778270 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:30:41.778295 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:30:41.782829 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:30:41.879328 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Mar 13 00:30:41.887659 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Mar 13 00:30:41.894678 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Mar 13 00:30:41.900567 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 13 00:30:42.040436 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 13 00:30:42.043228 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 13 00:30:42.059106 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 13 00:30:42.069448 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 13 00:30:42.070714 kernel: BTRFS info (device sda6): last unmount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:30:42.114157 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 13 00:30:42.119700 ignition[952]: INFO : Ignition 2.22.0
Mar 13 00:30:42.119700 ignition[952]: INFO : Stage: mount
Mar 13 00:30:42.119700 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:42.119700 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:42.137658 ignition[952]: INFO : mount: mount passed
Mar 13 00:30:42.137658 ignition[952]: INFO : Ignition finished successfully
Mar 13 00:30:42.122234 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 13 00:30:42.125929 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 13 00:30:42.152039 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 13 00:30:42.178538 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 (8:6) scanned by mount (964)
Mar 13 00:30:42.181331 kernel: BTRFS info (device sda6): first mount of filesystem 451985e5-e916-48b1-8100-483c174d7b52
Mar 13 00:30:42.181382 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 13 00:30:42.186806 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 13 00:30:42.186859 kernel: BTRFS info (device sda6): turning on async discard
Mar 13 00:30:42.186884 kernel: BTRFS info (device sda6): enabling free space tree
Mar 13 00:30:42.190328 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 13 00:30:42.227117 ignition[980]: INFO : Ignition 2.22.0
Mar 13 00:30:42.227117 ignition[980]: INFO : Stage: files
Mar 13 00:30:42.230639 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 13 00:30:42.230639 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 13 00:30:42.230639 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Mar 13 00:30:42.230639 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 13 00:30:42.230639 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 13 00:30:42.245649 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 13 00:30:42.245649 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 13 00:30:42.245649 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 13 00:30:42.245649 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:30:42.245649 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1
Mar 13 00:30:42.235454 unknown[980]: wrote ssh authorized keys file for user: core
Mar 13 00:30:42.338763 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 13 00:30:42.476977 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:30:42.480680 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:30:42.512580 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.8-x86-64.raw: attempt #1
Mar 13 00:30:43.068070 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 13 00:30:43.218666 systemd-networkd[803]: eth0: Gained IPv6LL
Mar 13 00:30:44.325224 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.8-x86-64.raw"
Mar 13 00:30:44.325224 ignition[980]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 13 00:30:44.333615 ignition[980]: INFO : files: files passed
Mar 13 00:30:44.333615 ignition[980]: INFO : Ignition finished successfully
Mar 13 00:30:44.333846 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 13 00:30:44.341811 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 13 00:30:44.349784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 13 00:30:44.373140 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 13 00:30:44.373245 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 13 00:30:44.395799 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:30:44.395799 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:30:44.389449 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 13 00:30:44.408615 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 13 00:30:44.393192 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 13 00:30:44.399929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 13 00:30:44.464971 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 13 00:30:44.465253 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 13 00:30:44.468823 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 13 00:30:44.472815 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 13 00:30:44.476843 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 13 00:30:44.479131 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 13 00:30:44.508239 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:30:44.513880 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 13 00:30:44.541405 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:30:44.541795 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 13 00:30:44.546054 systemd[1]: Stopped target timers.target - Timer Units.
Mar 13 00:30:44.550049 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 13 00:30:44.550483 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 13 00:30:44.560645 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 13 00:30:44.561120 systemd[1]: Stopped target basic.target - Basic System.
Mar 13 00:30:44.565014 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 13 00:30:44.568933 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 13 00:30:44.573035 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 13 00:30:44.576930 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Mar 13 00:30:44.581010 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 13 00:30:44.584965 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 13 00:30:44.589080 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 13 00:30:44.593969 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 13 00:30:44.598077 systemd[1]: Stopped target swap.target - Swaps.
Mar 13 00:30:44.601984 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 13 00:30:44.602352 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 13 00:30:44.611636 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 13 00:30:44.612028 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 13 00:30:44.615976 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 13 00:30:44.616427 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 13 00:30:44.620038 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 13 00:30:44.620244 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 13 00:30:44.626988 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 13 00:30:44.627423 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 13 00:30:44.629961 systemd[1]: ignition-files.service: Deactivated successfully. Mar 13 00:30:44.630155 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 13 00:30:44.635737 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 13 00:30:44.645621 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 13 00:30:44.645858 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:30:44.655883 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 13 00:30:44.661784 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 13 00:30:44.662710 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:30:44.674889 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 13 00:30:44.675103 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 13 00:30:44.687863 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 13 00:30:44.688621 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Mar 13 00:30:44.692600 ignition[1035]: INFO : Ignition 2.22.0 Mar 13 00:30:44.692600 ignition[1035]: INFO : Stage: umount Mar 13 00:30:44.692600 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 13 00:30:44.692600 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Mar 13 00:30:44.711594 ignition[1035]: INFO : umount: umount passed Mar 13 00:30:44.711594 ignition[1035]: INFO : Ignition finished successfully Mar 13 00:30:44.702973 systemd[1]: sysroot-boot.mount: Deactivated successfully. 
Mar 13 00:30:44.704300 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 13 00:30:44.704455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 13 00:30:44.715001 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 13 00:30:44.715133 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 13 00:30:44.721115 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 13 00:30:44.721194 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 13 00:30:44.724702 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 13 00:30:44.724778 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 13 00:30:44.731645 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 13 00:30:44.731717 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 13 00:30:44.738646 systemd[1]: Stopped target network.target - Network. Mar 13 00:30:44.742593 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 13 00:30:44.742675 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 13 00:30:44.746628 systemd[1]: Stopped target paths.target - Path Units. Mar 13 00:30:44.750575 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 13 00:30:44.754558 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:30:44.757582 systemd[1]: Stopped target slices.target - Slice Units. Mar 13 00:30:44.761594 systemd[1]: Stopped target sockets.target - Socket Units. Mar 13 00:30:44.765677 systemd[1]: iscsid.socket: Deactivated successfully. Mar 13 00:30:44.765748 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 13 00:30:44.769635 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 13 00:30:44.769701 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 13 00:30:44.773624 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 13 00:30:44.773719 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 13 00:30:44.779646 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 13 00:30:44.779721 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 13 00:30:44.782788 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 13 00:30:44.782965 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 13 00:30:44.787478 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 13 00:30:44.795725 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 13 00:30:44.801408 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 13 00:30:44.801721 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Mar 13 00:30:44.808599 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Mar 13 00:30:44.808869 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 13 00:30:44.808982 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 13 00:30:44.813455 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Mar 13 00:30:44.814847 systemd[1]: Stopped target network-pre.target - Preparation for Network. Mar 13 00:30:44.819654 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 13 00:30:44.819723 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:30:44.824753 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 13 00:30:44.827603 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 13 00:30:44.827679 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 13 00:30:44.830800 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 13 00:30:44.830858 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 00:30:44.838838 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 13 00:30:44.838888 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 13 00:30:44.845684 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 13 00:30:44.845774 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:30:44.852891 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:30:44.860164 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 13 00:30:44.860237 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:30:44.867875 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 13 00:30:44.868091 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:30:44.874537 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 13 00:30:44.874661 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 13 00:30:44.882686 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 13 00:30:44.882744 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 13 00:30:44.885739 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 13 00:30:44.885808 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 13 00:30:44.892818 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 13 00:30:44.892875 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 13 00:30:44.901589 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 13 00:30:44.901677 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 13 00:30:44.909814 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Mar 13 00:30:44.917569 systemd[1]: systemd-network-generator.service: Deactivated successfully. Mar 13 00:30:44.917654 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:30:44.920841 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 13 00:30:44.920912 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:30:44.928831 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 13 00:30:45.015634 systemd-journald[192]: Received SIGTERM from PID 1 (systemd). Mar 13 00:30:44.928903 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 13 00:30:44.940074 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Mar 13 00:30:44.940134 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Mar 13 00:30:44.940182 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Mar 13 00:30:44.940704 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 13 00:30:44.940853 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 13 00:30:44.946009 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 13 00:30:44.946144 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 13 00:30:44.950355 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 13 00:30:44.955925 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 13 00:30:44.978288 systemd[1]: Switching root. 
Mar 13 00:30:45.047604 systemd-journald[192]: Journal stopped Mar 13 00:30:47.011402 kernel: SELinux: policy capability network_peer_controls=1 Mar 13 00:30:47.011455 kernel: SELinux: policy capability open_perms=1 Mar 13 00:30:47.011483 kernel: SELinux: policy capability extended_socket_class=1 Mar 13 00:30:47.011528 kernel: SELinux: policy capability always_check_network=0 Mar 13 00:30:47.014526 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 13 00:30:47.014561 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 13 00:30:47.014583 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 13 00:30:47.014602 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 13 00:30:47.014627 kernel: SELinux: policy capability userspace_initial_context=0 Mar 13 00:30:47.014646 kernel: audit: type=1403 audit(1773361845.610:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 13 00:30:47.014669 systemd[1]: Successfully loaded SELinux policy in 65.138ms. Mar 13 00:30:47.014692 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.882ms. Mar 13 00:30:47.014714 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 13 00:30:47.014735 systemd[1]: Detected virtualization google. Mar 13 00:30:47.014761 systemd[1]: Detected architecture x86-64. Mar 13 00:30:47.014782 systemd[1]: Detected first boot. Mar 13 00:30:47.014802 systemd[1]: Initializing machine ID from random generator. Mar 13 00:30:47.014824 zram_generator::config[1079]: No configuration found. 
Mar 13 00:30:47.014846 kernel: Guest personality initialized and is inactive Mar 13 00:30:47.014866 kernel: VMCI host device registered (name=vmci, major=10, minor=258) Mar 13 00:30:47.014889 kernel: Initialized host personality Mar 13 00:30:47.014908 kernel: NET: Registered PF_VSOCK protocol family Mar 13 00:30:47.014929 systemd[1]: Populated /etc with preset unit settings. Mar 13 00:30:47.014953 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Mar 13 00:30:47.014974 systemd[1]: initrd-switch-root.service: Deactivated successfully. Mar 13 00:30:47.014995 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Mar 13 00:30:47.015016 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Mar 13 00:30:47.015048 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 13 00:30:47.015070 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 13 00:30:47.015092 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 13 00:30:47.015113 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 13 00:30:47.015134 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 13 00:30:47.015156 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 13 00:30:47.015179 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 13 00:30:47.015204 systemd[1]: Created slice user.slice - User and Session Slice. Mar 13 00:30:47.015225 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 13 00:30:47.015247 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 13 00:30:47.015269 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Mar 13 00:30:47.015291 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 13 00:30:47.015313 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 13 00:30:47.015341 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 13 00:30:47.015364 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 13 00:30:47.015387 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 13 00:30:47.015412 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 13 00:30:47.015434 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Mar 13 00:30:47.015455 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Mar 13 00:30:47.015479 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Mar 13 00:30:47.015738 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 13 00:30:47.015773 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 13 00:30:47.015796 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 13 00:30:47.015828 systemd[1]: Reached target slices.target - Slice Units. Mar 13 00:30:47.015852 systemd[1]: Reached target swap.target - Swaps. Mar 13 00:30:47.015876 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 13 00:30:47.015901 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 13 00:30:47.015926 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Mar 13 00:30:47.015951 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 13 00:30:47.015981 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 13 00:30:47.016016 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Mar 13 00:30:47.016037 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 13 00:30:47.016066 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 13 00:30:47.018691 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 13 00:30:47.018723 systemd[1]: Mounting media.mount - External Media Directory... Mar 13 00:30:47.018748 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:30:47.018776 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 13 00:30:47.018798 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 13 00:30:47.018822 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 13 00:30:47.018846 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 13 00:30:47.018869 systemd[1]: Reached target machines.target - Containers. Mar 13 00:30:47.018892 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 13 00:30:47.018914 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:30:47.018936 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 13 00:30:47.018967 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 13 00:30:47.018989 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:30:47.019012 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:30:47.019051 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:30:47.019074 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Mar 13 00:30:47.019097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:30:47.019119 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 13 00:30:47.019145 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Mar 13 00:30:47.019167 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Mar 13 00:30:47.019195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Mar 13 00:30:47.019218 systemd[1]: Stopped systemd-fsck-usr.service. Mar 13 00:30:47.019241 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:30:47.019264 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 13 00:30:47.019287 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 13 00:30:47.019309 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 13 00:30:47.019332 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 13 00:30:47.019354 kernel: fuse: init (API version 7.41) Mar 13 00:30:47.019380 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Mar 13 00:30:47.019402 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 13 00:30:47.019426 systemd[1]: verity-setup.service: Deactivated successfully. Mar 13 00:30:47.019449 systemd[1]: Stopped verity-setup.service. Mar 13 00:30:47.019475 kernel: loop: module loaded Mar 13 00:30:47.020347 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Mar 13 00:30:47.020389 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Mar 13 00:30:47.020414 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 13 00:30:47.020440 systemd[1]: Mounted media.mount - External Media Directory. Mar 13 00:30:47.020471 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 13 00:30:47.020524 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 13 00:30:47.020551 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 13 00:30:47.020575 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 13 00:30:47.020600 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 13 00:30:47.020624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 13 00:30:47.020648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 13 00:30:47.020672 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 13 00:30:47.020701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 13 00:30:47.020723 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 13 00:30:47.020746 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 13 00:30:47.020770 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 13 00:30:47.020795 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 13 00:30:47.020819 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 13 00:30:47.020844 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 13 00:30:47.020869 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 13 00:30:47.020893 kernel: ACPI: bus type drm_connector registered Mar 13 00:30:47.020921 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Mar 13 00:30:47.020945 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 13 00:30:47.020970 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 13 00:30:47.020994 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Mar 13 00:30:47.021030 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 13 00:30:47.021068 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 13 00:30:47.021138 systemd-journald[1150]: Collecting audit messages is disabled. Mar 13 00:30:47.021195 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 13 00:30:47.021221 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 13 00:30:47.021247 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 13 00:30:47.021276 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Mar 13 00:30:47.021302 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Mar 13 00:30:47.021328 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:30:47.021354 systemd-journald[1150]: Journal started Mar 13 00:30:47.021401 systemd-journald[1150]: Runtime Journal (/run/log/journal/8ee29e64d6894b4f965d8f709366bf45) is 8M, max 148.6M, 140.6M free. Mar 13 00:30:47.024022 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 13 00:30:46.438572 systemd[1]: Queued start job for default target multi-user.target. Mar 13 00:30:46.460241 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Mar 13 00:30:46.460840 systemd[1]: systemd-journald.service: Deactivated successfully. 
Mar 13 00:30:47.035344 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 13 00:30:47.035409 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 13 00:30:47.039581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 13 00:30:47.047551 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 13 00:30:47.055665 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 13 00:30:47.062610 systemd[1]: Started systemd-journald.service - Journal Service. Mar 13 00:30:47.068549 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 13 00:30:47.072015 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 13 00:30:47.076572 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 13 00:30:47.096950 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 13 00:30:47.128536 kernel: loop0: detected capacity change from 0 to 128560 Mar 13 00:30:47.129284 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 13 00:30:47.139716 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 13 00:30:47.151459 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 13 00:30:47.154903 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 13 00:30:47.207897 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 13 00:30:47.224030 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 13 00:30:47.239489 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Mar 13 00:30:47.247389 kernel: loop1: detected capacity change from 0 to 228704 Mar 13 00:30:47.253720 systemd-journald[1150]: Time spent on flushing to /var/log/journal/8ee29e64d6894b4f965d8f709366bf45 is 96.984ms for 969 entries. Mar 13 00:30:47.253720 systemd-journald[1150]: System Journal (/var/log/journal/8ee29e64d6894b4f965d8f709366bf45) is 8M, max 584.8M, 576.8M free. Mar 13 00:30:47.376426 systemd-journald[1150]: Received client request to flush runtime journal. Mar 13 00:30:47.376552 kernel: loop2: detected capacity change from 0 to 50736 Mar 13 00:30:47.261997 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 13 00:30:47.295112 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 13 00:30:47.307394 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 13 00:30:47.382665 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 13 00:30:47.392593 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Mar 13 00:30:47.392629 systemd-tmpfiles[1218]: ACLs are not supported, ignoring. Mar 13 00:30:47.401227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 13 00:30:47.421597 kernel: loop3: detected capacity change from 0 to 110984 Mar 13 00:30:47.465193 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 13 00:30:47.516493 kernel: loop4: detected capacity change from 0 to 128560 Mar 13 00:30:47.556557 kernel: loop5: detected capacity change from 0 to 228704 Mar 13 00:30:47.596997 kernel: loop6: detected capacity change from 0 to 50736 Mar 13 00:30:47.631413 kernel: loop7: detected capacity change from 0 to 110984 Mar 13 00:30:47.660550 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'. Mar 13 00:30:47.668526 (sd-merge)[1227]: Merged extensions into '/usr'. 
Mar 13 00:30:47.675788 systemd[1]: Reload requested from client PID 1182 ('systemd-sysext') (unit systemd-sysext.service)... Mar 13 00:30:47.676006 systemd[1]: Reloading... Mar 13 00:30:47.933536 zram_generator::config[1252]: No configuration found. Mar 13 00:30:48.201736 ldconfig[1177]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 13 00:30:48.402529 systemd[1]: Reloading finished in 725 ms. Mar 13 00:30:48.426131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 13 00:30:48.430175 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Mar 13 00:30:48.445132 systemd[1]: Starting ensure-sysext.service... Mar 13 00:30:48.450802 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 13 00:30:48.476929 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 13 00:30:48.488795 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Mar 13 00:30:48.488845 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Mar 13 00:30:48.488993 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 13 00:30:48.491044 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 13 00:30:48.491757 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 13 00:30:48.491888 systemd[1]: Reload requested from client PID 1293 ('systemctl') (unit ensure-sysext.service)... Mar 13 00:30:48.491907 systemd[1]: Reloading... Mar 13 00:30:48.493899 systemd-tmpfiles[1294]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Mar 13 00:30:48.494758 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Mar 13 00:30:48.495003 systemd-tmpfiles[1294]: ACLs are not supported, ignoring. Mar 13 00:30:48.505562 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:30:48.505581 systemd-tmpfiles[1294]: Skipping /boot Mar 13 00:30:48.529127 systemd-tmpfiles[1294]: Detected autofs mount point /boot during canonicalization of boot. Mar 13 00:30:48.529155 systemd-tmpfiles[1294]: Skipping /boot Mar 13 00:30:48.583111 systemd-udevd[1297]: Using default interface naming scheme 'v255'. Mar 13 00:30:48.621539 zram_generator::config[1322]: No configuration found. Mar 13 00:30:49.021549 kernel: mousedev: PS/2 mouse device common for all mice Mar 13 00:30:49.059556 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Mar 13 00:30:49.099532 kernel: ACPI: button: Power Button [PWRF] Mar 13 00:30:49.103543 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Mar 13 00:30:49.118865 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input5 Mar 13 00:30:49.142303 kernel: ACPI: button: Sleep Button [SLPF] Mar 13 00:30:49.199529 kernel: EDAC MC: Ver: 3.0.0 Mar 13 00:30:49.282932 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Mar 13 00:30:49.284896 systemd[1]: Reloading finished in 792 ms. Mar 13 00:30:49.299533 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 13 00:30:49.304577 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 13 00:30:49.474836 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM. Mar 13 00:30:49.497775 systemd[1]: Finished ensure-sysext.service. Mar 13 00:30:49.513125 systemd[1]: Reached target tpm2.target - Trusted Platform Module. 
Mar 13 00:30:49.521643 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 13 00:30:49.523107 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 13 00:30:49.537688 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 13 00:30:49.546880 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 13 00:30:49.551713 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 13 00:30:49.562443 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 13 00:30:49.573093 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 13 00:30:49.585115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 13 00:30:49.595763 systemd[1]: Starting setup-oem.service - Setup OEM... Mar 13 00:30:49.603790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 13 00:30:49.606344 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 13 00:30:49.616618 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 13 00:30:49.619978 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 13 00:30:49.622035 augenrules[1444]: No rules Mar 13 00:30:49.633646 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 13 00:30:49.646117 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 13 00:30:49.646240 systemd[1]: Reached target time-set.target - System Time Set. 
Mar 13 00:30:49.650460 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 13 00:30:49.653072 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 13 00:30:49.653153 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 13 00:30:49.656126 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:30:49.657196 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:30:49.657746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 13 00:30:49.658157 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 13 00:30:49.659344 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 13 00:30:49.659692 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 13 00:30:49.660182 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 13 00:30:49.660445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 13 00:30:49.662093 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 13 00:30:49.662397 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 13 00:30:49.679479 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 13 00:30:49.679604 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 13 00:30:49.684287 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 13 00:30:49.729244 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 13 00:30:49.741185 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 13 00:30:49.758562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 13 00:30:49.770916 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 13 00:30:49.780190 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 13 00:30:49.805780 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 13 00:30:49.815874 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Mar 13 00:30:49.819092 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 13 00:30:49.819168 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 13 00:30:49.878315 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 13 00:30:49.915457 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Mar 13 00:30:49.928292 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 13 00:30:49.933108 systemd-networkd[1449]: lo: Link UP
Mar 13 00:30:49.933486 systemd-networkd[1449]: lo: Gained carrier
Mar 13 00:30:49.936063 systemd-networkd[1449]: Enumeration completed
Mar 13 00:30:49.936612 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:30:49.936625 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 13 00:30:49.937287 systemd-networkd[1449]: eth0: Link UP
Mar 13 00:30:49.937534 systemd-networkd[1449]: eth0: Gained carrier
Mar 13 00:30:49.937561 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 13 00:30:49.938848 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 13 00:30:49.946592 systemd-networkd[1449]: eth0: Overlong DHCP hostname received, shortened from 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf.c.flatcar-212911.internal' to 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf'
Mar 13 00:30:49.946617 systemd-networkd[1449]: eth0: DHCPv4 address 10.128.0.72/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 13 00:30:49.949345 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 13 00:30:49.962815 systemd-resolved[1451]: Positive Trust Anchors:
Mar 13 00:30:49.962833 systemd-resolved[1451]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 13 00:30:49.962905 systemd-resolved[1451]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 13 00:30:49.964153 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 13 00:30:49.971324 systemd-resolved[1451]: Defaulting to hostname 'linux'.
Mar 13 00:30:49.974724 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 13 00:30:49.984664 systemd[1]: Reached target network.target - Network.
Mar 13 00:30:49.993616 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 13 00:30:50.003631 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 13 00:30:50.012795 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 13 00:30:50.023655 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 13 00:30:50.033728 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Mar 13 00:30:50.043788 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 13 00:30:50.052713 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 13 00:30:50.062601 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 13 00:30:50.072596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 13 00:30:50.072650 systemd[1]: Reached target paths.target - Path Units.
Mar 13 00:30:50.079598 systemd[1]: Reached target timers.target - Timer Units.
Mar 13 00:30:50.089997 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 13 00:30:50.102172 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 13 00:30:50.112289 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 13 00:30:50.122771 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 13 00:30:50.132602 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 13 00:30:50.152241 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 13 00:30:50.161007 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 13 00:30:50.172790 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 13 00:30:50.182819 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 13 00:30:50.193176 systemd[1]: Reached target sockets.target - Socket Units.
Mar 13 00:30:50.201606 systemd[1]: Reached target basic.target - Basic System.
Mar 13 00:30:50.210702 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:30:50.210757 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 13 00:30:50.212287 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 13 00:30:50.228689 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 13 00:30:50.246843 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 13 00:30:50.258341 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 13 00:30:50.272611 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 13 00:30:50.287606 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 13 00:30:50.296676 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 13 00:30:50.299845 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Mar 13 00:30:50.312822 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 13 00:30:50.318563 jq[1503]: false
Mar 13 00:30:50.323750 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:30:50.338690 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 13 00:30:50.344557 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Mar 13 00:30:50.341939 oslogin_cache_refresh[1507]: Refreshing passwd entry cache
Mar 13 00:30:50.350584 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 13 00:30:50.353407 coreos-metadata[1500]: Mar 13 00:30:50.353 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Mar 13 00:30:50.358243 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetch successful
Mar 13 00:30:50.358243 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Mar 13 00:30:50.360587 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetch successful
Mar 13 00:30:50.360587 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Mar 13 00:30:50.360587 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetch successful
Mar 13 00:30:50.360587 coreos-metadata[1500]: Mar 13 00:30:50.358 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Mar 13 00:30:50.359225 oslogin_cache_refresh[1507]: Failure getting users, quitting
Mar 13 00:30:50.360877 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting users, quitting
Mar 13 00:30:50.360877 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:30:50.360877 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Refreshing group entry cache
Mar 13 00:30:50.359252 oslogin_cache_refresh[1507]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Mar 13 00:30:50.359314 oslogin_cache_refresh[1507]: Refreshing group entry cache
Mar 13 00:30:50.362301 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 13 00:30:50.365642 extend-filesystems[1504]: Found /dev/sda6
Mar 13 00:30:50.379714 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Failure getting groups, quitting
Mar 13 00:30:50.379714 google_oslogin_nss_cache[1507]: oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:30:50.362710 oslogin_cache_refresh[1507]: Failure getting groups, quitting
Mar 13 00:30:50.379878 coreos-metadata[1500]: Mar 13 00:30:50.368 INFO Fetch successful
Mar 13 00:30:50.379929 extend-filesystems[1504]: Found /dev/sda9
Mar 13 00:30:50.379929 extend-filesystems[1504]: Checking size of /dev/sda9
Mar 13 00:30:50.362728 oslogin_cache_refresh[1507]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Mar 13 00:30:50.381566 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 13 00:30:50.397774 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Mar 13 00:30:50.398683 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 13 00:30:50.400284 systemd[1]: Starting update-engine.service - Update Engine...
Mar 13 00:30:50.402677 extend-filesystems[1504]: Resized partition /dev/sda9
Mar 13 00:30:50.430458 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 3587067 blocks
Mar 13 00:30:50.424749 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 13 00:30:50.430650 extend-filesystems[1530]: resize2fs 1.47.3 (8-Jul-2025)
Mar 13 00:30:50.445286 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 13 00:30:50.457377 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 13 00:30:50.457770 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 13 00:30:50.458255 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Mar 13 00:30:50.459682 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Mar 13 00:30:50.469792 systemd[1]: motdgen.service: Deactivated successfully.
Mar 13 00:30:50.471635 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 13 00:30:50.476361 ntpd[1509]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:30:50.476451 ntpd[1509]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: ----------------------------------------------------
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: corporation. Support and training for ntp-4 are
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: available at https://www.nwtime.org/support
Mar 13 00:30:50.476867 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: ----------------------------------------------------
Mar 13 00:30:50.476468 ntpd[1509]: ----------------------------------------------------
Mar 13 00:30:50.476482 ntpd[1509]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:30:50.476495 ntpd[1509]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:30:50.476526 ntpd[1509]: corporation. Support and training for ntp-4 are
Mar 13 00:30:50.476539 ntpd[1509]: available at https://www.nwtime.org/support
Mar 13 00:30:50.476553 ntpd[1509]: ----------------------------------------------------
Mar 13 00:30:50.484982 ntpd[1509]: proto: precision = 0.083 usec (-23)
Mar 13 00:30:50.485635 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: proto: precision = 0.083 usec (-23)
Mar 13 00:30:50.485813 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 13 00:30:50.531667 kernel: ntpd[1509]: segfault at 24 ip 0000560c5ff05aeb sp 00007ffcee9e1760 error 4 in ntpd[68aeb,560c5fea3000+80000] likely on CPU 0 (core 0, socket 0)
Mar 13 00:30:50.531759 kernel: EXT4-fs (sda9): resized filesystem to 3587067
Mar 13 00:30:50.546703 kernel: Code: 0f 1e fa 41 56 41 55 41 54 55 53 48 89 fb e8 8c eb f9 ff 44 8b 28 49 89 c4 e8 51 6b ff ff 48 89 c5 48 85 db 0f 84 a5 00 00 00 <0f> b7 0b 66 83 f9 02 0f 84 c0 00 00 00 66 83 f9 0a 74 32 66 85 c9
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: basedate set to 2026-02-28
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Listen normally on 3 eth0 10.128.0.72:123
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: Listen normally on 4 lo [::1]:123
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: bind(21) AF_INET6 [fe80::4001:aff:fe80:48%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:30:50.546767 ntpd[1509]: 13 Mar 00:30:50 ntpd[1509]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:48%2]:123
Mar 13 00:30:50.490379 ntpd[1509]: basedate set to 2026-02-28
Mar 13 00:30:50.532542 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 13 00:30:50.490409 ntpd[1509]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:30:50.490688 ntpd[1509]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:30:50.490733 ntpd[1509]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:30:50.490969 ntpd[1509]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:30:50.491010 ntpd[1509]: Listen normally on 3 eth0 10.128.0.72:123
Mar 13 00:30:50.491050 ntpd[1509]: Listen normally on 4 lo [::1]:123
Mar 13 00:30:50.491091 ntpd[1509]: bind(21) AF_INET6 [fe80::4001:aff:fe80:48%2]:123 flags 0x811 failed: Cannot assign requested address
Mar 13 00:30:50.491118 ntpd[1509]: unable to create socket on eth0 (5) for [fe80::4001:aff:fe80:48%2]:123
Mar 13 00:30:50.564709 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 13 00:30:50.571406 jq[1531]: true
Mar 13 00:30:50.571701 extend-filesystems[1530]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 13 00:30:50.571701 extend-filesystems[1530]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 13 00:30:50.571701 extend-filesystems[1530]: The filesystem on /dev/sda9 is now 3587067 (4k) blocks long.
Mar 13 00:30:50.622789 update_engine[1529]: I20260313 00:30:50.591835 1529 main.cc:92] Flatcar Update Engine starting
Mar 13 00:30:50.572856 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 13 00:30:50.623161 extend-filesystems[1504]: Resized filesystem in /dev/sda9
Mar 13 00:30:50.573167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 13 00:30:50.639095 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 13 00:30:50.648267 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 13 00:30:50.649098 jq[1547]: true
Mar 13 00:30:50.708267 systemd-coredump[1567]: Process 1509 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing...
Mar 13 00:30:50.716075 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump.
Mar 13 00:30:50.729564 tar[1539]: linux-amd64/LICENSE
Mar 13 00:30:50.729966 tar[1539]: linux-amd64/helm
Mar 13 00:30:50.734662 systemd[1]: Started systemd-coredump@0-1567-0.service - Process Core Dump (PID 1567/UID 0).
Mar 13 00:30:50.865581 bash[1578]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:30:50.872575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 13 00:30:50.892163 systemd[1]: Starting sshkeys.service...
Mar 13 00:30:50.913618 systemd-logind[1525]: Watching system buttons on /dev/input/event2 (Power Button)
Mar 13 00:30:50.913657 systemd-logind[1525]: Watching system buttons on /dev/input/event3 (Sleep Button)
Mar 13 00:30:50.913690 systemd-logind[1525]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 13 00:30:50.916480 systemd-logind[1525]: New seat seat0.
Mar 13 00:30:50.919549 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 13 00:30:50.972978 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 13 00:30:50.978951 dbus-daemon[1501]: [system] SELinux support is enabled
Mar 13 00:30:50.990278 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 13 00:30:51.000297 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 13 00:30:51.007725 dbus-daemon[1501]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 13 00:30:51.015139 update_engine[1529]: I20260313 00:30:51.013978 1529 update_check_scheduler.cc:74] Next update check in 7m26s
Mar 13 00:30:51.016889 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 13 00:30:51.017123 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 13 00:30:51.018878 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 13 00:30:51.027785 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 13 00:30:51.027999 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 13 00:30:51.039351 systemd[1]: Started update-engine.service - Update Engine.
Mar 13 00:30:51.059071 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 13 00:30:51.074110 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 13 00:30:51.090962 coreos-metadata[1581]: Mar 13 00:30:51.090 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Mar 13 00:30:51.096348 coreos-metadata[1581]: Mar 13 00:30:51.096 INFO Fetch failed with 404: resource not found
Mar 13 00:30:51.096348 coreos-metadata[1581]: Mar 13 00:30:51.096 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Mar 13 00:30:51.099788 coreos-metadata[1581]: Mar 13 00:30:51.096 INFO Fetch successful
Mar 13 00:30:51.099788 coreos-metadata[1581]: Mar 13 00:30:51.097 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Mar 13 00:30:51.099788 coreos-metadata[1581]: Mar 13 00:30:51.099 INFO Fetch failed with 404: resource not found
Mar 13 00:30:51.099788 coreos-metadata[1581]: Mar 13 00:30:51.099 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Mar 13 00:30:51.100261 coreos-metadata[1581]: Mar 13 00:30:51.100 INFO Fetch failed with 404: resource not found
Mar 13 00:30:51.100261 coreos-metadata[1581]: Mar 13 00:30:51.100 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Mar 13 00:30:51.105719 coreos-metadata[1581]: Mar 13 00:30:51.101 INFO Fetch successful
Mar 13 00:30:51.107977 unknown[1581]: wrote ssh authorized keys file for user: core
Mar 13 00:30:51.170646 update-ssh-keys[1591]: Updated "/home/core/.ssh/authorized_keys"
Mar 13 00:30:51.170315 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 13 00:30:51.187711 systemd[1]: Finished sshkeys.service.
Mar 13 00:30:51.220438 systemd-networkd[1449]: eth0: Gained IPv6LL
Mar 13 00:30:51.233343 sshd_keygen[1545]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 13 00:30:51.235777 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 13 00:30:51.249546 systemd[1]: Reached target network-online.target - Network is Online.
Mar 13 00:30:51.264895 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:30:51.286341 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 13 00:30:51.297052 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Mar 13 00:30:51.384789 init.sh[1601]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Mar 13 00:30:51.384789 init.sh[1601]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Mar 13 00:30:51.387926 init.sh[1601]: + /usr/bin/google_instance_setup
Mar 13 00:30:51.402882 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 13 00:30:51.418435 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 13 00:30:51.442864 systemd-coredump[1571]: Process 1509 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module ld-linux-x86-64.so.2 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. Stack trace of thread 1509: #0 0x0000560c5ff05aeb n/a (ntpd + 0x68aeb) #1 0x0000560c5feaecdf n/a (ntpd + 0x11cdf) #2 0x0000560c5feaf575 n/a (ntpd + 0x12575) #3 0x0000560c5feaad8a n/a (ntpd + 0xdd8a) #4 0x0000560c5feac5d3 n/a (ntpd + 0xf5d3) #5 0x0000560c5feb4fd1 n/a (ntpd + 0x17fd1) #6 0x0000560c5fea5c2d n/a (ntpd + 0x8c2d) #7 0x00007f275b99216c n/a (libc.so.6 + 0x2716c) #8 0x00007f275b992229 __libc_start_main (libc.so.6 + 0x27229) #9 0x0000560c5fea5c55 n/a (ntpd + 0x8c55) ELF object binary architecture: AMD x86-64
Mar 13 00:30:51.452363 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV
Mar 13 00:30:51.453701 systemd[1]: ntpd.service: Failed with result 'core-dump'.
Mar 13 00:30:51.463852 systemd[1]: systemd-coredump@0-1567-0.service: Deactivated successfully.
Mar 13 00:30:51.499952 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 13 00:30:51.529614 systemd[1]: issuegen.service: Deactivated successfully.
Mar 13 00:30:51.530600 systemd[1]: Finished issuegen.service - Generate /run/issue.
Mar 13 00:30:51.540153 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 13 00:30:51.543402 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 13 00:30:51.545233 dbus-daemon[1501]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1584 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 13 00:30:51.556695 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1.
Mar 13 00:30:51.569827 systemd[1]: Started ntpd.service - Network Time Service.
Mar 13 00:30:51.585734 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 13 00:30:51.596898 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Mar 13 00:30:51.620825 locksmithd[1586]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 13 00:30:51.646937 containerd[1541]: time="2026-03-13T00:30:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 13 00:30:51.651110 containerd[1541]: time="2026-03-13T00:30:51.651056045Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7
Mar 13 00:30:51.682071 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Mar 13 00:30:51.699285 systemd[1]: Started getty@tty1.service - Getty on tty1.
Mar 13 00:30:51.704266 ntpd[1633]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: ntpd 4.2.8p18@1.4062-o Thu Mar 12 21:34:27 UTC 2026 (1): Starting
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: ----------------------------------------------------
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: corporation. Support and training for ntp-4 are
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: available at https://www.nwtime.org/support
Mar 13 00:30:51.708439 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: ----------------------------------------------------
Mar 13 00:30:51.704360 ntpd[1633]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 13 00:30:51.704379 ntpd[1633]: ----------------------------------------------------
Mar 13 00:30:51.704398 ntpd[1633]: ntp-4 is maintained by Network Time Foundation,
Mar 13 00:30:51.714274 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: proto: precision = 0.071 usec (-24)
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: basedate set to 2026-02-28
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen normally on 3 eth0 10.128.0.72:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen normally on 4 lo [::1]:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:48%2]:123
Mar 13 00:30:51.720367 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: Listening on routing socket on fd #22 for interface updates
Mar 13 00:30:51.704415 ntpd[1633]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 13 00:30:51.704432 ntpd[1633]: corporation. Support and training for ntp-4 are
Mar 13 00:30:51.704448 ntpd[1633]: available at https://www.nwtime.org/support
Mar 13 00:30:51.704466 ntpd[1633]: ----------------------------------------------------
Mar 13 00:30:51.713063 ntpd[1633]: proto: precision = 0.071 usec (-24)
Mar 13 00:30:51.717114 ntpd[1633]: basedate set to 2026-02-28
Mar 13 00:30:51.717141 ntpd[1633]: gps base set to 2026-03-01 (week 2408)
Mar 13 00:30:51.718815 ntpd[1633]: Listen and drop on 0 v6wildcard [::]:123
Mar 13 00:30:51.718870 ntpd[1633]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 13 00:30:51.723553 systemd[1]: Reached target getty.target - Login Prompts.
Mar 13 00:30:51.719138 ntpd[1633]: Listen normally on 2 lo 127.0.0.1:123
Mar 13 00:30:51.719188 ntpd[1633]: Listen normally on 3 eth0 10.128.0.72:123
Mar 13 00:30:51.719250 ntpd[1633]: Listen normally on 4 lo [::1]:123
Mar 13 00:30:51.719297 ntpd[1633]: Listen normally on 5 eth0 [fe80::4001:aff:fe80:48%2]:123
Mar 13 00:30:51.719341 ntpd[1633]: Listening on routing socket on fd #22 for interface updates
Mar 13 00:30:51.732611 ntpd[1633]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:30:51.735383 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:30:51.735383 ntpd[1633]: 13 Mar 00:30:51 ntpd[1633]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:30:51.732657 ntpd[1633]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753442281Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.612µs"
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753524219Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753559862Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753790831Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753815940Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753859111Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753948663Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.753967891Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.754274087Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.754303918Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.754327212Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756952 containerd[1541]: time="2026-03-13T00:30:51.754344291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 13 00:30:51.756629 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 13 00:30:51.761841 containerd[1541]: time="2026-03-13T00:30:51.760857951Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 13 00:30:51.761841 containerd[1541]: time="2026-03-13T00:30:51.761293921Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:30:51.761841 containerd[1541]: time="2026-03-13T00:30:51.761355625Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 13 00:30:51.761841 containerd[1541]: time="2026-03-13T00:30:51.761377395Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 13 00:30:51.761841 containerd[1541]: time="2026-03-13T00:30:51.761472088Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 13 00:30:51.768379 containerd[1541]: time="2026-03-13T00:30:51.764443379Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 13 00:30:51.768379 containerd[1541]: time="2026-03-13T00:30:51.765618557Z" level=info msg="metadata content store policy set" policy=shared Mar 13 00:30:51.769608 systemd[1]: Started sshd@0-10.128.0.72:22-20.161.92.111:57142.service - OpenSSH per-connection server daemon (20.161.92.111:57142). 
Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774657330Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774745528Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774771699Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774844099Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774868998Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774889497Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774929208Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774952844Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774975205Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.774994836Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.775014162Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 13 
00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.775036739Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.775241067Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 13 00:30:51.775276 containerd[1541]: time="2026-03-13T00:30:51.775273080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775298844Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775335112Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775359868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775379995Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775401030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775430097Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775461345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775483482Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775682318Z" 
level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775763176Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 13 00:30:51.777535 containerd[1541]: time="2026-03-13T00:30:51.775788367Z" level=info msg="Start snapshots syncer" Mar 13 00:30:51.778035 containerd[1541]: time="2026-03-13T00:30:51.777586948Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 13 00:30:51.784565 containerd[1541]: time="2026-03-13T00:30:51.778176024Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\"
:true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 13 00:30:51.784565 containerd[1541]: time="2026-03-13T00:30:51.778271173Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781150664Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781392269Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781449919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781486380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781539467Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781580144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781618149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781640812Z" level=info msg="loading plugin" 
id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781679948Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781699967Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.781720183Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.784114849Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.784155742Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 13 00:30:51.784772 containerd[1541]: time="2026-03-13T00:30:51.784174510Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784193285Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784209726Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784228340Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784257945Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 13 
00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784286069Z" level=info msg="runtime interface created" Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784297214Z" level=info msg="created NRI interface" Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784312413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784335062Z" level=info msg="Connect containerd service" Mar 13 00:30:51.785422 containerd[1541]: time="2026-03-13T00:30:51.784383966Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 13 00:30:51.790553 containerd[1541]: time="2026-03-13T00:30:51.788282251Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 13 00:30:52.017044 polkitd[1634]: Started polkitd version 126 Mar 13 00:30:52.040394 polkitd[1634]: Loading rules from directory /etc/polkit-1/rules.d Mar 13 00:30:52.057822 polkitd[1634]: Loading rules from directory /run/polkit-1/rules.d Mar 13 00:30:52.057923 polkitd[1634]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:30:52.058545 polkitd[1634]: Loading rules from directory /usr/local/share/polkit-1/rules.d Mar 13 00:30:52.058598 polkitd[1634]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Mar 13 00:30:52.058658 polkitd[1634]: Loading rules from directory /usr/share/polkit-1/rules.d Mar 13 00:30:52.066298 polkitd[1634]: Finished loading, compiling and executing 2 rules Mar 13 00:30:52.066727 systemd[1]: Started polkit.service - Authorization Manager. 
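[editor's note] The `failed to load cni during init` error above means containerd's CRI plugin found no network config in /etc/cni/net.d; it is benign at this point in boot and clears once a CNI conflist is installed. A minimal sketch of the kind of file it looks for (the filename, network name, and subnet here are illustrative assumptions, not values from this host):

```python
import json

# Illustrative CNI conflist of the shape containerd's CRI plugin loads from
# /etc/cni/net.d (e.g. as 10-example.conflist); all values are assumptions.
conflist = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",  # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.85.0.0/16",  # example range only
            },
        },
        # portmap enables hostPort support for pods
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

print(json.dumps(conflist, indent=2))
```

Dropping a file like this under /etc/cni/net.d is normally done by the cluster's CNI plugin installer rather than by hand, which is why the error persists until the node is networked by Kubernetes.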
Mar 13 00:30:52.067451 dbus-daemon[1501]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Mar 13 00:30:52.069627 polkitd[1634]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Mar 13 00:30:52.126825 systemd-hostnamed[1584]: Hostname set to (transient) Mar 13 00:30:52.131881 systemd-resolved[1451]: System hostname changed to 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf'. Mar 13 00:30:52.226264 containerd[1541]: time="2026-03-13T00:30:52.226202136Z" level=info msg="Start subscribing containerd event" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.226586134Z" level=info msg="Start recovering state" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228278484Z" level=info msg="Start event monitor" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228301940Z" level=info msg="Start cni network conf syncer for default" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228313772Z" level=info msg="Start streaming server" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228328014Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228339436Z" level=info msg="runtime interface starting up..." Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228349456Z" level=info msg="starting plugins..." Mar 13 00:30:52.228533 containerd[1541]: time="2026-03-13T00:30:52.228368221Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 13 00:30:52.229202 containerd[1541]: time="2026-03-13T00:30:52.229165299Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 13 00:30:52.229421 containerd[1541]: time="2026-03-13T00:30:52.229390300Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 13 00:30:52.230011 containerd[1541]: time="2026-03-13T00:30:52.229596045Z" level=info msg="containerd successfully booted in 0.586495s" Mar 13 00:30:52.229749 systemd[1]: Started containerd.service - containerd container runtime. Mar 13 00:30:52.282430 tar[1539]: linux-amd64/README.md Mar 13 00:30:52.311490 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 13 00:30:52.322769 sshd[1643]: Accepted publickey for core from 20.161.92.111 port 57142 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:30:52.326131 sshd-session[1643]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:30:52.341928 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 13 00:30:52.355451 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 13 00:30:52.385092 systemd-logind[1525]: New session 1 of user core. Mar 13 00:30:52.408805 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 13 00:30:52.424455 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 13 00:30:52.459626 (systemd)[1675]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 13 00:30:52.466748 systemd-logind[1525]: New session c1 of user core. Mar 13 00:30:52.700053 instance-setup[1608]: INFO Running google_set_multiqueue. Mar 13 00:30:52.726308 instance-setup[1608]: INFO Set channels for eth0 to 2. Mar 13 00:30:52.732348 instance-setup[1608]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 13 00:30:52.734898 instance-setup[1608]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 13 00:30:52.734966 instance-setup[1608]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. 
Mar 13 00:30:52.736840 instance-setup[1608]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 13 00:30:52.737345 instance-setup[1608]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 13 00:30:52.740051 instance-setup[1608]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 13 00:30:52.740116 instance-setup[1608]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Mar 13 00:30:52.744709 instance-setup[1608]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 13 00:30:52.763286 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 13 00:30:52.769140 instance-setup[1608]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 13 00:30:52.771754 instance-setup[1608]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 13 00:30:52.771804 instance-setup[1608]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 13 00:30:52.795936 init.sh[1601]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 13 00:30:52.808640 systemd[1675]: Queued start job for default target default.target. Mar 13 00:30:52.815328 systemd[1675]: Created slice app.slice - User Application Slice. Mar 13 00:30:52.815379 systemd[1675]: Reached target paths.target - Paths. Mar 13 00:30:52.815463 systemd[1675]: Reached target timers.target - Timers. Mar 13 00:30:52.819152 systemd[1675]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 13 00:30:52.845759 systemd[1675]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 13 00:30:52.848243 systemd[1675]: Reached target sockets.target - Sockets. Mar 13 00:30:52.848610 systemd[1675]: Reached target basic.target - Basic System. Mar 13 00:30:52.848869 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 13 00:30:52.849396 systemd[1675]: Reached target default.target - Main User Target. 
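[editor's note] The `Queue 0 XPS=1` / `Queue 1 XPS=2` lines above are hexadecimal CPU bitmasks written to /sys/class/net/eth0/queues/tx-N/xps_cpus: bit N set means CPU N may use that transmit queue, so queue 0 is pinned to CPU 0 (mask 1) and queue 1 to CPU 1 (mask 2). A small sketch of how such a mask is derived (the helper name is my own, not from the google_set_multiqueue script):

```python
def cpu_mask(cpus):
    # Build the hex bitmask sysfs expects in xps_cpus: bit N set => CPU N.
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return format(mask, "x")

print(cpu_mask([0]))     # mask for CPU 0 only, as written to tx-0
print(cpu_mask([1]))     # mask for CPU 1 only, as written to tx-1
print(cpu_mask([0, 1]))  # mask covering both CPUs
```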
Mar 13 00:30:52.849459 systemd[1675]: Startup finished in 362ms. Mar 13 00:30:52.862719 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 13 00:30:53.003491 systemd[1]: Started sshd@1-10.128.0.72:22-20.161.92.111:57146.service - OpenSSH per-connection server daemon (20.161.92.111:57146). Mar 13 00:30:53.022206 startup-script[1709]: INFO Starting startup scripts. Mar 13 00:30:53.032931 startup-script[1709]: INFO No startup scripts found in metadata. Mar 13 00:30:53.033004 startup-script[1709]: INFO Finished running startup scripts. Mar 13 00:30:53.067688 init.sh[1601]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 13 00:30:53.067859 init.sh[1601]: + daemon_pids=() Mar 13 00:30:53.067988 init.sh[1601]: + for d in accounts clock_skew network Mar 13 00:30:53.068348 init.sh[1601]: + daemon_pids+=($!) Mar 13 00:30:53.068676 init.sh[1720]: + /usr/bin/google_accounts_daemon Mar 13 00:30:53.069719 init.sh[1601]: + for d in accounts clock_skew network Mar 13 00:30:53.069719 init.sh[1601]: + daemon_pids+=($!) Mar 13 00:30:53.069719 init.sh[1601]: + for d in accounts clock_skew network Mar 13 00:30:53.069841 init.sh[1721]: + /usr/bin/google_clock_skew_daemon Mar 13 00:30:53.070333 init.sh[1722]: + /usr/bin/google_network_daemon Mar 13 00:30:53.071733 init.sh[1601]: + daemon_pids+=($!) Mar 13 00:30:53.071733 init.sh[1601]: + NOTIFY_SOCKET=/run/systemd/notify Mar 13 00:30:53.071733 init.sh[1601]: + /usr/bin/systemd-notify --ready Mar 13 00:30:53.084999 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 13 00:30:53.097539 init.sh[1601]: + wait -n 1720 1721 1722 Mar 13 00:30:53.271045 sshd[1717]: Accepted publickey for core from 20.161.92.111 port 57146 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:30:53.275144 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:30:53.291359 systemd-logind[1525]: New session 2 of user core. 
Mar 13 00:30:53.293840 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 13 00:30:53.407533 sshd[1724]: Connection closed by 20.161.92.111 port 57146 Mar 13 00:30:53.409771 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 13 00:30:53.419472 systemd[1]: sshd@1-10.128.0.72:22-20.161.92.111:57146.service: Deactivated successfully. Mar 13 00:30:53.421596 systemd-logind[1525]: Session 2 logged out. Waiting for processes to exit. Mar 13 00:30:53.424849 systemd[1]: session-2.scope: Deactivated successfully. Mar 13 00:30:53.431317 systemd-logind[1525]: Removed session 2. Mar 13 00:30:53.459754 systemd[1]: Started sshd@2-10.128.0.72:22-20.161.92.111:57160.service - OpenSSH per-connection server daemon (20.161.92.111:57160). Mar 13 00:30:53.521464 google-clock-skew[1721]: INFO Starting Google Clock Skew daemon. Mar 13 00:30:53.547335 google-clock-skew[1721]: INFO Clock drift token has changed: 0. Mar 13 00:30:53.586389 google-networking[1722]: INFO Starting Google Networking daemon. Mar 13 00:30:53.634806 groupadd[1740]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 13 00:30:53.640193 groupadd[1740]: group added to /etc/gshadow: name=google-sudoers Mar 13 00:30:53.684588 groupadd[1740]: new group: name=google-sudoers, GID=1000 Mar 13 00:30:53.712968 google-accounts[1720]: INFO Starting Google Accounts daemon. Mar 13 00:30:53.720706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:30:53.727191 google-accounts[1720]: WARNING OS Login not installed. Mar 13 00:30:53.728804 google-accounts[1720]: INFO Creating a new user account for 0. Mar 13 00:30:53.732013 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 13 00:30:53.734681 init.sh[1755]: useradd: invalid user name '0': use --badname to ignore Mar 13 00:30:53.734837 google-accounts[1720]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. 
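[editor's note] The `useradd: invalid user name '0'` failure above (exit status 3) comes from shadow-utils' username validation, which rejects names that could be confused with numeric UIDs. A rough sketch of that check; the regex approximates useradd's default rule and is not its exact implementation:

```python
import re

# Approximation of shadow-utils' default username rule: start with a
# lowercase letter or underscore, then letters, digits, underscore, or
# dash, optionally ending in '$' (machine accounts). Purely numeric
# names like '0' fail, as seen in the log above.
NAME_RE = re.compile(r"[a-z_][a-z0-9_-]*\$?")

def is_valid_username(name: str) -> bool:
    return bool(NAME_RE.fullmatch(name)) and len(name) <= 32

print(is_valid_username("core"))  # valid service-account style name
print(is_valid_username("0"))     # rejected: starts with a digit
```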
Mar 13 00:30:53.737098 (kubelet)[1753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:30:53.742483 systemd[1]: Startup finished in 4.052s (kernel) + 7.820s (initrd) + 8.193s (userspace) = 20.066s. Mar 13 00:30:53.777403 sshd[1736]: Accepted publickey for core from 20.161.92.111 port 57160 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:30:53.780586 sshd-session[1736]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:30:53.790572 systemd-logind[1525]: New session 3 of user core. Mar 13 00:30:53.794700 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 13 00:30:53.891793 sshd[1761]: Connection closed by 20.161.92.111 port 57160 Mar 13 00:30:53.893717 sshd-session[1736]: pam_unix(sshd:session): session closed for user core Mar 13 00:30:53.901207 systemd[1]: sshd@2-10.128.0.72:22-20.161.92.111:57160.service: Deactivated successfully. Mar 13 00:30:53.905555 systemd[1]: session-3.scope: Deactivated successfully. Mar 13 00:30:53.908291 systemd-logind[1525]: Session 3 logged out. Waiting for processes to exit. Mar 13 00:30:53.910290 systemd-logind[1525]: Removed session 3. Mar 13 00:30:54.000912 systemd-resolved[1451]: Clock change detected. Flushing caches. Mar 13 00:30:54.001694 google-clock-skew[1721]: INFO Synced system time with hardware clock. 
Mar 13 00:30:54.580029 kubelet[1753]: E0313 00:30:54.579955 1753 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:30:54.583081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:30:54.583355 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:30:54.583909 systemd[1]: kubelet.service: Consumed 1.263s CPU time, 268M memory peak. Mar 13 00:31:03.982262 systemd[1]: Started sshd@3-10.128.0.72:22-20.161.92.111:56452.service - OpenSSH per-connection server daemon (20.161.92.111:56452). Mar 13 00:31:04.210659 sshd[1773]: Accepted publickey for core from 20.161.92.111 port 56452 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:31:04.212291 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:31:04.219243 systemd-logind[1525]: New session 4 of user core. Mar 13 00:31:04.227379 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 13 00:31:04.320383 sshd[1776]: Connection closed by 20.161.92.111 port 56452 Mar 13 00:31:04.321252 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Mar 13 00:31:04.327019 systemd[1]: sshd@3-10.128.0.72:22-20.161.92.111:56452.service: Deactivated successfully. Mar 13 00:31:04.329700 systemd[1]: session-4.scope: Deactivated successfully. Mar 13 00:31:04.330987 systemd-logind[1525]: Session 4 logged out. Waiting for processes to exit. Mar 13 00:31:04.332867 systemd-logind[1525]: Removed session 4. Mar 13 00:31:04.366033 systemd[1]: Started sshd@4-10.128.0.72:22-20.161.92.111:56466.service - OpenSSH per-connection server daemon (20.161.92.111:56466). 
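[editor's note] The kubelet exit above is expected on a node that has not yet joined a cluster: /var/lib/kubelet/config.yaml is normally written by `kubeadm init`/`kubeadm join`, and systemd keeps restarting the unit until it appears. An illustrative skeleton of that file (field values are assumptions for the sketch, not read from this host):

```python
# Skeleton of /var/lib/kubelet/config.yaml as kubeadm would generate it;
# the paths and values below are illustrative defaults, not from this boot.
kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
authentication:
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.crt
"""

print(kubelet_config)
```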
Mar 13 00:31:04.598068 sshd[1782]: Accepted publickey for core from 20.161.92.111 port 56466 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:31:04.600010 sshd-session[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:31:04.601352 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 13 00:31:04.605432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:31:04.610326 systemd-logind[1525]: New session 5 of user core. Mar 13 00:31:04.619393 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 13 00:31:04.705381 sshd[1788]: Connection closed by 20.161.92.111 port 56466 Mar 13 00:31:04.707460 sshd-session[1782]: pam_unix(sshd:session): session closed for user core Mar 13 00:31:04.712190 systemd[1]: sshd@4-10.128.0.72:22-20.161.92.111:56466.service: Deactivated successfully. Mar 13 00:31:04.714907 systemd[1]: session-5.scope: Deactivated successfully. Mar 13 00:31:04.717518 systemd-logind[1525]: Session 5 logged out. Waiting for processes to exit. Mar 13 00:31:04.720346 systemd-logind[1525]: Removed session 5. Mar 13 00:31:04.754542 systemd[1]: Started sshd@5-10.128.0.72:22-20.161.92.111:56476.service - OpenSSH per-connection server daemon (20.161.92.111:56476). Mar 13 00:31:04.910430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 13 00:31:04.922785 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 13 00:31:04.977493 kubelet[1802]: E0313 00:31:04.977431 1802 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 13 00:31:04.982077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 13 00:31:04.982314 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 13 00:31:04.983061 systemd[1]: kubelet.service: Consumed 211ms CPU time, 110.8M memory peak. Mar 13 00:31:04.990368 sshd[1794]: Accepted publickey for core from 20.161.92.111 port 56476 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:31:04.991881 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:31:04.999229 systemd-logind[1525]: New session 6 of user core. Mar 13 00:31:05.008359 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 13 00:31:05.091342 sshd[1809]: Connection closed by 20.161.92.111 port 56476 Mar 13 00:31:05.092480 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Mar 13 00:31:05.098114 systemd-logind[1525]: Session 6 logged out. Waiting for processes to exit. Mar 13 00:31:05.098553 systemd[1]: sshd@5-10.128.0.72:22-20.161.92.111:56476.service: Deactivated successfully. Mar 13 00:31:05.101415 systemd[1]: session-6.scope: Deactivated successfully. Mar 13 00:31:05.103712 systemd-logind[1525]: Removed session 6. Mar 13 00:31:05.150166 systemd[1]: Started sshd@6-10.128.0.72:22-20.161.92.111:56484.service - OpenSSH per-connection server daemon (20.161.92.111:56484). 
Mar 13 00:31:05.379093 sshd[1815]: Accepted publickey for core from 20.161.92.111 port 56484 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:31:05.380741 sshd-session[1815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:31:05.388103 systemd-logind[1525]: New session 7 of user core. Mar 13 00:31:05.393376 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 13 00:31:05.474737 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 13 00:31:05.475245 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 13 00:31:05.491439 sudo[1819]: pam_unix(sudo:session): session closed for user root Mar 13 00:31:05.526195 sshd[1818]: Connection closed by 20.161.92.111 port 56484 Mar 13 00:31:05.527209 sshd-session[1815]: pam_unix(sshd:session): session closed for user core Mar 13 00:31:05.533324 systemd[1]: sshd@6-10.128.0.72:22-20.161.92.111:56484.service: Deactivated successfully. Mar 13 00:31:05.535755 systemd[1]: session-7.scope: Deactivated successfully. Mar 13 00:31:05.536920 systemd-logind[1525]: Session 7 logged out. Waiting for processes to exit. Mar 13 00:31:05.538989 systemd-logind[1525]: Removed session 7. Mar 13 00:31:05.577236 systemd[1]: Started sshd@7-10.128.0.72:22-20.161.92.111:56492.service - OpenSSH per-connection server daemon (20.161.92.111:56492). Mar 13 00:31:05.809380 sshd[1825]: Accepted publickey for core from 20.161.92.111 port 56492 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o Mar 13 00:31:05.811028 sshd-session[1825]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 13 00:31:05.818256 systemd-logind[1525]: New session 8 of user core. Mar 13 00:31:05.827366 systemd[1]: Started session-8.scope - Session 8 of User core. 
Mar 13 00:31:05.892482 sudo[1830]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Mar 13 00:31:05.892956 sudo[1830]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:31:05.899578 sudo[1830]: pam_unix(sudo:session): session closed for user root
Mar 13 00:31:05.912930 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Mar 13 00:31:05.913424 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:31:05.925831 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 13 00:31:05.977877 augenrules[1852]: No rules
Mar 13 00:31:05.979460 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 13 00:31:05.979793 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 13 00:31:05.981809 sudo[1829]: pam_unix(sudo:session): session closed for user root
Mar 13 00:31:06.017284 sshd[1828]: Connection closed by 20.161.92.111 port 56492
Mar 13 00:31:06.019380 sshd-session[1825]: pam_unix(sshd:session): session closed for user core
Mar 13 00:31:06.023877 systemd[1]: sshd@7-10.128.0.72:22-20.161.92.111:56492.service: Deactivated successfully.
Mar 13 00:31:06.026361 systemd[1]: session-8.scope: Deactivated successfully.
Mar 13 00:31:06.028297 systemd-logind[1525]: Session 8 logged out. Waiting for processes to exit.
Mar 13 00:31:06.030452 systemd-logind[1525]: Removed session 8.
Mar 13 00:31:06.060987 systemd[1]: Started sshd@8-10.128.0.72:22-20.161.92.111:56504.service - OpenSSH per-connection server daemon (20.161.92.111:56504).
Mar 13 00:31:06.293238 sshd[1861]: Accepted publickey for core from 20.161.92.111 port 56504 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:31:06.294015 sshd-session[1861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:31:06.301236 systemd-logind[1525]: New session 9 of user core.
Mar 13 00:31:06.308377 systemd[1]: Started session-9.scope - Session 9 of User core.
Mar 13 00:31:06.374672 sudo[1865]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Mar 13 00:31:06.375142 sudo[1865]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Mar 13 00:31:06.839562 systemd[1]: Starting docker.service - Docker Application Container Engine...
Mar 13 00:31:06.851806 (dockerd)[1883]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Mar 13 00:31:07.192212 dockerd[1883]: time="2026-03-13T00:31:07.189446659Z" level=info msg="Starting up"
Mar 13 00:31:07.193151 dockerd[1883]: time="2026-03-13T00:31:07.193113864Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Mar 13 00:31:07.209039 dockerd[1883]: time="2026-03-13T00:31:07.208972089Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Mar 13 00:31:07.261264 dockerd[1883]: time="2026-03-13T00:31:07.260964158Z" level=info msg="Loading containers: start."
Mar 13 00:31:07.279209 kernel: Initializing XFRM netlink socket
Mar 13 00:31:07.628610 systemd-networkd[1449]: docker0: Link UP
Mar 13 00:31:07.633478 dockerd[1883]: time="2026-03-13T00:31:07.633424781Z" level=info msg="Loading containers: done."
Mar 13 00:31:07.650866 dockerd[1883]: time="2026-03-13T00:31:07.650809603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Mar 13 00:31:07.651072 dockerd[1883]: time="2026-03-13T00:31:07.650896971Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Mar 13 00:31:07.651072 dockerd[1883]: time="2026-03-13T00:31:07.651025531Z" level=info msg="Initializing buildkit"
Mar 13 00:31:07.679012 dockerd[1883]: time="2026-03-13T00:31:07.678948473Z" level=info msg="Completed buildkit initialization"
Mar 13 00:31:07.687904 dockerd[1883]: time="2026-03-13T00:31:07.687828280Z" level=info msg="Daemon has completed initialization"
Mar 13 00:31:07.688285 dockerd[1883]: time="2026-03-13T00:31:07.687912151Z" level=info msg="API listen on /run/docker.sock"
Mar 13 00:31:07.688370 systemd[1]: Started docker.service - Docker Application Container Engine.
Mar 13 00:31:08.468225 containerd[1541]: time="2026-03-13T00:31:08.468142406Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\""
Mar 13 00:31:09.012084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3455363360.mount: Deactivated successfully.
Mar 13 00:31:10.560326 containerd[1541]: time="2026-03-13T00:31:10.560266802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:10.561729 containerd[1541]: time="2026-03-13T00:31:10.561684774Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.9: active requests=0, bytes read=30117617"
Mar 13 00:31:10.562776 containerd[1541]: time="2026-03-13T00:31:10.562701796Z" level=info msg="ImageCreate event name:\"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:10.566099 containerd[1541]: time="2026-03-13T00:31:10.565745839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:10.567384 containerd[1541]: time="2026-03-13T00:31:10.567024270Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.9\" with image id \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a1fe354f8b36dbce37fef26c3731e2376fb8eb7375e7df3068df7ad11656f022\", size \"30112785\" in 2.098809693s"
Mar 13 00:31:10.567384 containerd[1541]: time="2026-03-13T00:31:10.567071471Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.9\" returns image reference \"sha256:d3c49e1d7c1cb22893888d0d7a4142c80e16023143fdd2c0225a362ec08ab4a4\""
Mar 13 00:31:10.567766 containerd[1541]: time="2026-03-13T00:31:10.567738828Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\""
Mar 13 00:31:12.172611 containerd[1541]: time="2026-03-13T00:31:12.172541852Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:12.173924 containerd[1541]: time="2026-03-13T00:31:12.173863933Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.9: active requests=0, bytes read=26022056"
Mar 13 00:31:12.175200 containerd[1541]: time="2026-03-13T00:31:12.175115780Z" level=info msg="ImageCreate event name:\"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:12.178487 containerd[1541]: time="2026-03-13T00:31:12.178413583Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:12.179761 containerd[1541]: time="2026-03-13T00:31:12.179599861Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.9\" with image id \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:a495c9f30cfd4d57ae6c27cb21e477b9b1ddebdace61762e80a06fe264a0d61a\", size \"27678758\" in 1.611673221s"
Mar 13 00:31:12.179761 containerd[1541]: time="2026-03-13T00:31:12.179644811Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.9\" returns image reference \"sha256:bdbe897c17b593b8163eebd3c55c6723711b8b775bf7e554da6d75d33d114e98\""
Mar 13 00:31:12.180564 containerd[1541]: time="2026-03-13T00:31:12.180527220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\""
Mar 13 00:31:13.527077 containerd[1541]: time="2026-03-13T00:31:13.527012550Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:13.528449 containerd[1541]: time="2026-03-13T00:31:13.528386203Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.9: active requests=0, bytes read=20162974"
Mar 13 00:31:13.529704 containerd[1541]: time="2026-03-13T00:31:13.529634016Z" level=info msg="ImageCreate event name:\"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:13.533775 containerd[1541]: time="2026-03-13T00:31:13.533709613Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:13.535205 containerd[1541]: time="2026-03-13T00:31:13.535059571Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.9\" with image id \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d1533368d3acd772e3d11225337a61be319b5ecf7523adeff7ebfe4107ab05b5\", size \"21819712\" in 1.354485914s"
Mar 13 00:31:13.535205 containerd[1541]: time="2026-03-13T00:31:13.535188687Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.9\" returns image reference \"sha256:04e9a75bd404b7d5d286565ebcd5e8d5a2be3355e6cb0c3f1ab9db53fe6f180a\""
Mar 13 00:31:13.536150 containerd[1541]: time="2026-03-13T00:31:13.536114789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\""
Mar 13 00:31:14.639896 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3516496159.mount: Deactivated successfully.
Mar 13 00:31:15.173488 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 13 00:31:15.177414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:15.474708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:15.486690 (kubelet)[2177]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:31:15.528906 containerd[1541]: time="2026-03-13T00:31:15.527927947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:15.530534 containerd[1541]: time="2026-03-13T00:31:15.530490358Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.9: active requests=0, bytes read=31828974"
Mar 13 00:31:15.531856 containerd[1541]: time="2026-03-13T00:31:15.531813824Z" level=info msg="ImageCreate event name:\"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:15.535126 kubelet[2177]: E0313 00:31:15.535079 2177 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:31:15.536463 containerd[1541]: time="2026-03-13T00:31:15.536407596Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:15.538689 containerd[1541]: time="2026-03-13T00:31:15.538380079Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.9\" with image id \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\", repo tag \"registry.k8s.io/kube-proxy:v1.33.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:079ba0e77e457dbf755e78bf3a6d736b7eb73d021fe53b853a0b82bbb2c17322\", size \"31827666\" in 2.002219663s"
Mar 13 00:31:15.538689 containerd[1541]: time="2026-03-13T00:31:15.538419358Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.9\" returns image reference \"sha256:36d290108190a8d792e275b3e6ba8f1c0def0fd717573d69c3970816d945510a\""
Mar 13 00:31:15.539083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:31:15.539638 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:31:15.540203 containerd[1541]: time="2026-03-13T00:31:15.539947230Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Mar 13 00:31:15.540491 systemd[1]: kubelet.service: Consumed 215ms CPU time, 108.6M memory peak.
Mar 13 00:31:15.955743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106654314.mount: Deactivated successfully.
Mar 13 00:31:17.110848 containerd[1541]: time="2026-03-13T00:31:17.110778467Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:17.112266 containerd[1541]: time="2026-03-13T00:31:17.112217333Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20943692"
Mar 13 00:31:17.113545 containerd[1541]: time="2026-03-13T00:31:17.113477614Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:17.117037 containerd[1541]: time="2026-03-13T00:31:17.116736925Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:17.118290 containerd[1541]: time="2026-03-13T00:31:17.118085164Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.578098535s"
Mar 13 00:31:17.118290 containerd[1541]: time="2026-03-13T00:31:17.118130157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\""
Mar 13 00:31:17.119232 containerd[1541]: time="2026-03-13T00:31:17.119197301Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 13 00:31:22.204011 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 13 00:31:25.673749 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Mar 13 00:31:25.676214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:25.972831 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:25.984701 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:31:26.036668 kubelet[2247]: E0313 00:31:26.036604 2247 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:31:26.039465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:31:26.039714 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:31:26.040274 systemd[1]: kubelet.service: Consumed 188ms CPU time, 110.7M memory peak.
Mar 13 00:31:36.173308 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
Mar 13 00:31:36.175520 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:36.546797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:36.558700 (kubelet)[2262]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:31:36.606130 kubelet[2262]: E0313 00:31:36.606054 2262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:31:36.608841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:31:36.609091 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:31:36.609735 systemd[1]: kubelet.service: Consumed 185ms CPU time, 109.8M memory peak.
Mar 13 00:31:36.723383 update_engine[1529]: I20260313 00:31:36.723279 1529 update_attempter.cc:509] Updating boot flags...
Mar 13 00:31:46.673398 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5.
Mar 13 00:31:46.676302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:46.963636 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:46.980714 (kubelet)[2301]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 13 00:31:47.027188 kubelet[2301]: E0313 00:31:47.027101 2301 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 13 00:31:47.029970 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 13 00:31:47.030240 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 13 00:31:47.030935 systemd[1]: kubelet.service: Consumed 180ms CPU time, 108.1M memory peak.
Mar 13 00:31:47.178501 containerd[1541]: time="2026-03-13T00:31:47.178444102Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 13 00:31:47.179551 containerd[1541]: time="2026-03-13T00:31:47.178457429Z" level=error msg="PullImage \"registry.k8s.io/pause:3.10\" failed" error="rpc error: code = DeadlineExceeded desc = failed to pull and unpack image \"registry.k8s.io/pause:3.10\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://registry.k8s.io/v2/pause/manifests/sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\": dial tcp 34.96.108.209:443: i/o timeout"
Mar 13 00:31:47.179551 containerd[1541]: time="2026-03-13T00:31:47.179257108Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 13 00:31:47.591938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239582985.mount: Deactivated successfully.
Mar 13 00:31:47.597285 containerd[1541]: time="2026-03-13T00:31:47.597229512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:31:47.598277 containerd[1541]: time="2026-03-13T00:31:47.598220950Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321348"
Mar 13 00:31:47.599517 containerd[1541]: time="2026-03-13T00:31:47.599453807Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:31:47.602001 containerd[1541]: time="2026-03-13T00:31:47.601940197Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 13 00:31:47.603050 containerd[1541]: time="2026-03-13T00:31:47.602880564Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 423.588277ms"
Mar 13 00:31:47.603050 containerd[1541]: time="2026-03-13T00:31:47.602921776Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 13 00:31:47.603898 containerd[1541]: time="2026-03-13T00:31:47.603451997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\""
Mar 13 00:31:48.073213 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1636915090.mount: Deactivated successfully.
Mar 13 00:31:49.269544 containerd[1541]: time="2026-03-13T00:31:49.269479079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.24-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:49.274205 containerd[1541]: time="2026-03-13T00:31:49.273130903Z" level=info msg="ImageCreate event name:\"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:49.274205 containerd[1541]: time="2026-03-13T00:31:49.273291468Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.24-0: active requests=0, bytes read=23720121"
Mar 13 00:31:49.280380 containerd[1541]: time="2026-03-13T00:31:49.280338069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:31:49.281738 containerd[1541]: time="2026-03-13T00:31:49.281698889Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.24-0\" with image id \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\", repo tag \"registry.k8s.io/etcd:3.5.24-0\", repo digest \"registry.k8s.io/etcd@sha256:251e7e490f64859d329cd963bc879dc04acf3d7195bb52c4c50b4a07bedf37d6\", size \"23716032\" in 1.678206992s"
Mar 13 00:31:49.281896 containerd[1541]: time="2026-03-13T00:31:49.281871493Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.24-0\" returns image reference \"sha256:8cb12dd0c3e42c6d0175d09a060358cbb68a3ecc2ba4dbb00327c7d760e1425d\""
Mar 13 00:31:53.365037 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:53.365970 systemd[1]: kubelet.service: Consumed 180ms CPU time, 108.1M memory peak.
Mar 13 00:31:53.369050 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:53.411726 systemd[1]: Reload requested from client PID 2403 ('systemctl') (unit session-9.scope)...
Mar 13 00:31:53.411748 systemd[1]: Reloading...
Mar 13 00:31:53.569224 zram_generator::config[2443]: No configuration found.
Mar 13 00:31:53.898744 systemd[1]: Reloading finished in 486 ms.
Mar 13 00:31:53.973023 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 13 00:31:53.973166 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 13 00:31:53.973593 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:53.973711 systemd[1]: kubelet.service: Consumed 163ms CPU time, 98.3M memory peak.
Mar 13 00:31:53.976331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 13 00:31:54.340902 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 13 00:31:54.354756 (kubelet)[2498]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 13 00:31:54.407207 kubelet[2498]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:31:54.407207 kubelet[2498]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 13 00:31:54.407207 kubelet[2498]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 00:31:54.407724 kubelet[2498]: I0313 00:31:54.407226 2498 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 00:31:55.140202 kubelet[2498]: I0313 00:31:55.139994 2498 server.go:530] "Kubelet version" kubeletVersion="v1.33.8"
Mar 13 00:31:55.140202 kubelet[2498]: I0313 00:31:55.140032 2498 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 00:31:55.140623 kubelet[2498]: I0313 00:31:55.140597 2498 server.go:956] "Client rotation is on, will bootstrap in background"
Mar 13 00:31:55.194852 kubelet[2498]: E0313 00:31:55.194798 2498 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.128.0.72:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Mar 13 00:31:55.196648 kubelet[2498]: I0313 00:31:55.196383 2498 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 13 00:31:55.205617 kubelet[2498]: I0313 00:31:55.205591 2498 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 00:31:55.210575 kubelet[2498]: I0313 00:31:55.210539 2498 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 13 00:31:55.211744 kubelet[2498]: I0313 00:31:55.211681 2498 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 13 00:31:55.211963 kubelet[2498]: I0313 00:31:55.211731 2498 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 00:31:55.211963 kubelet[2498]: I0313 00:31:55.211959 2498 topology_manager.go:138] "Creating topology manager with none policy"
Mar 13 00:31:55.212245 kubelet[2498]: I0313 00:31:55.211975 2498 container_manager_linux.go:303] "Creating device plugin manager"
Mar 13 00:31:55.212245 kubelet[2498]: I0313 00:31:55.212149 2498 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:31:55.219076 kubelet[2498]: I0313 00:31:55.218970 2498 kubelet.go:480] "Attempting to sync node with API server"
Mar 13 00:31:55.219076 kubelet[2498]: I0313 00:31:55.219000 2498 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 13 00:31:55.220221 kubelet[2498]: I0313 00:31:55.219940 2498 kubelet.go:386] "Adding apiserver pod source"
Mar 13 00:31:55.227622 kubelet[2498]: I0313 00:31:55.227272 2498 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 13 00:31:55.231357 kubelet[2498]: E0313 00:31:55.231320 2498 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf&limit=500&resourceVersion=0\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Mar 13 00:31:55.231980 kubelet[2498]: E0313 00:31:55.231951 2498 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.128.0.72:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Mar 13 00:31:55.235322 kubelet[2498]: I0313 00:31:55.235279 2498 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Mar 13 00:31:55.236224 kubelet[2498]: I0313 00:31:55.235856 2498 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Mar 13 00:31:55.237185 kubelet[2498]: W0313 00:31:55.237146 2498 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 13 00:31:55.253996 kubelet[2498]: I0313 00:31:55.253961 2498 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 13 00:31:55.254295 kubelet[2498]: I0313 00:31:55.254042 2498 server.go:1289] "Started kubelet"
Mar 13 00:31:55.265041 kubelet[2498]: I0313 00:31:55.264938 2498 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 00:31:55.265571 kubelet[2498]: I0313 00:31:55.265537 2498 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 00:31:55.270221 kubelet[2498]: E0313 00:31:55.267274 2498 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.72:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.72:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf.189c3f4428ebb6f1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,UID:ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,},FirstTimestamp:2026-03-13 00:31:55.253995249 +0000 UTC m=+0.893682277,LastTimestamp:2026-03-13 00:31:55.253995249 +0000 UTC m=+0.893682277,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,}"
Mar 13 00:31:55.270221 kubelet[2498]: I0313 00:31:55.269441 2498 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 00:31:55.271947 kubelet[2498]: I0313 00:31:55.271906 2498 server.go:317] "Adding debug handlers to kubelet server"
Mar 13 00:31:55.273874 kubelet[2498]: I0313 00:31:55.273839 2498 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 00:31:55.275736 kubelet[2498]: I0313 00:31:55.275617 2498 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 13 00:31:55.280059 kubelet[2498]: E0313 00:31:55.279922 2498 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found"
Mar 13 00:31:55.280059 kubelet[2498]: I0313 00:31:55.280012 2498 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 13 00:31:55.280814 kubelet[2498]: I0313 00:31:55.280724 2498 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Mar 13 00:31:55.280991 kubelet[2498]: I0313 00:31:55.280972 2498 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 00:31:55.281788 kubelet[2498]: E0313 00:31:55.281725 2498 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.128.0.72:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Mar 13 00:31:55.282328 kubelet[2498]: E0313 00:31:55.282299 2498 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 13 00:31:55.283026 kubelet[2498]: I0313 00:31:55.282924 2498 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 13 00:31:55.284845 kubelet[2498]: E0313 00:31:55.284788 2498 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf?timeout=10s\": dial tcp 10.128.0.72:6443: connect: connection refused" interval="200ms"
Mar 13 00:31:55.286200 kubelet[2498]: I0313 00:31:55.285252 2498 factory.go:223] Registration of the containerd container factory successfully
Mar 13 00:31:55.286200 kubelet[2498]: I0313 00:31:55.285300 2498 factory.go:223] Registration of the systemd container factory successfully
Mar 13 00:31:55.307862 kubelet[2498]: I0313 00:31:55.307833 2498 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 13 00:31:55.308419 kubelet[2498]: I0313 00:31:55.308058 2498 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 13 00:31:55.308419 kubelet[2498]: I0313 00:31:55.308093 2498 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 00:31:55.311458 kubelet[2498]: I0313 00:31:55.310590 2498 policy_none.go:49] "None policy: Start"
Mar 13 00:31:55.311458 kubelet[2498]: I0313 00:31:55.310652 2498 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 13 00:31:55.311458 kubelet[2498]: I0313 00:31:55.310681 2498 state_mem.go:35] "Initializing new in-memory state store"
Mar 13 00:31:55.322947 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 13 00:31:55.327209 kubelet[2498]: I0313 00:31:55.325889 2498 kubelet_network_linux.go:49] "Initialized iptables rules."
protocol="IPv4" Mar 13 00:31:55.329434 kubelet[2498]: I0313 00:31:55.329404 2498 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Mar 13 00:31:55.329519 kubelet[2498]: I0313 00:31:55.329442 2498 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:31:55.329519 kubelet[2498]: I0313 00:31:55.329469 2498 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:31:55.329519 kubelet[2498]: I0313 00:31:55.329482 2498 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:31:55.329682 kubelet[2498]: E0313 00:31:55.329546 2498 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:31:55.332738 kubelet[2498]: E0313 00:31:55.332695 2498 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.128.0.72:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Mar 13 00:31:55.344593 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Mar 13 00:31:55.350318 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Mar 13 00:31:55.366988 kubelet[2498]: E0313 00:31:55.366959 2498 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:31:55.367435 kubelet[2498]: I0313 00:31:55.367412 2498 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:31:55.367628 kubelet[2498]: I0313 00:31:55.367579 2498 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:31:55.368556 kubelet[2498]: I0313 00:31:55.368533 2498 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:31:55.370609 kubelet[2498]: E0313 00:31:55.370530 2498 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:31:55.371022 kubelet[2498]: E0313 00:31:55.370979 2498 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" Mar 13 00:31:55.447902 systemd[1]: Created slice kubepods-burstable-pod988a46bccf9c0eae97ab67f35df4fd03.slice - libcontainer container kubepods-burstable-pod988a46bccf9c0eae97ab67f35df4fd03.slice. Mar 13 00:31:55.461349 kubelet[2498]: E0313 00:31:55.461058 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.467013 systemd[1]: Created slice kubepods-burstable-podd484da9f4eef5ab506ae9a819d5ac5f6.slice - libcontainer container kubepods-burstable-podd484da9f4eef5ab506ae9a819d5ac5f6.slice. 
Mar 13 00:31:55.475096 kubelet[2498]: E0313 00:31:55.474669 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.476090 kubelet[2498]: I0313 00:31:55.476037 2498 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.476571 kubelet[2498]: E0313 00:31:55.476538 2498 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.72:6443/api/v1/nodes\": dial tcp 10.128.0.72:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.481776 systemd[1]: Created slice kubepods-burstable-pod47e2c19421157cdcf404228783febeb7.slice - libcontainer container kubepods-burstable-pod47e2c19421157cdcf404228783febeb7.slice. Mar 13 00:31:55.482198 kubelet[2498]: I0313 00:31:55.482133 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482683 kubelet[2498]: I0313 00:31:55.482335 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482683 kubelet[2498]: I0313 
00:31:55.482380 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482683 kubelet[2498]: I0313 00:31:55.482408 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482683 kubelet[2498]: I0313 00:31:55.482438 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482920 kubelet[2498]: I0313 00:31:55.482468 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47e2c19421157cdcf404228783febeb7-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"47e2c19421157cdcf404228783febeb7\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482920 kubelet[2498]: I0313 00:31:55.482496 
2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482920 kubelet[2498]: I0313 00:31:55.482524 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.482920 kubelet[2498]: I0313 00:31:55.482554 2498 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.485151 kubelet[2498]: E0313 00:31:55.485118 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.485452 kubelet[2498]: E0313 00:31:55.485388 2498 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.128.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf?timeout=10s\": dial tcp 10.128.0.72:6443: connect: connection refused" interval="400ms" Mar 13 00:31:55.681466 kubelet[2498]: I0313 00:31:55.681401 2498 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.681918 kubelet[2498]: E0313 00:31:55.681875 2498 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.72:6443/api/v1/nodes\": dial tcp 10.128.0.72:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:55.763412 containerd[1541]: time="2026-03-13T00:31:55.762980129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:988a46bccf9c0eae97ab67f35df4fd03,Namespace:kube-system,Attempt:0,}" Mar 13 00:31:55.776274 containerd[1541]: time="2026-03-13T00:31:55.775893248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:d484da9f4eef5ab506ae9a819d5ac5f6,Namespace:kube-system,Attempt:0,}" Mar 13 00:31:55.804516 containerd[1541]: time="2026-03-13T00:31:55.804472748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:47e2c19421157cdcf404228783febeb7,Namespace:kube-system,Attempt:0,}" Mar 13 00:31:55.808447 containerd[1541]: time="2026-03-13T00:31:55.808386825Z" level=info msg="connecting to shim 2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3" address="unix:///run/containerd/s/2d62a9c7e52d23d72795e0ee0232f06f71bbe9c46f6d52d0d86f81fc4f6fd155" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:31:55.813481 containerd[1541]: time="2026-03-13T00:31:55.813443902Z" level=info msg="connecting 
to shim ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c" address="unix:///run/containerd/s/76abfccf5befe7878dd48bf49a65982b26a7c3f7cb08384f6912979fd9f14daf" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:31:55.857046 containerd[1541]: time="2026-03-13T00:31:55.856994484Z" level=info msg="connecting to shim 2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b" address="unix:///run/containerd/s/b3f108c2c4a613e7c86137b1e1007e04d7348da4dd3b3e15e7e4b844059aa2f2" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:31:55.887886 kubelet[2498]: E0313 00:31:55.887785 2498 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.72:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf?timeout=10s\": dial tcp 10.128.0.72:6443: connect: connection refused" interval="800ms" Mar 13 00:31:55.897412 systemd[1]: Started cri-containerd-2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3.scope - libcontainer container 2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3. Mar 13 00:31:55.901400 systemd[1]: Started cri-containerd-ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c.scope - libcontainer container ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c. Mar 13 00:31:55.918880 systemd[1]: Started cri-containerd-2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b.scope - libcontainer container 2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b. 
Mar 13 00:31:56.004527 containerd[1541]: time="2026-03-13T00:31:56.004413613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:988a46bccf9c0eae97ab67f35df4fd03,Namespace:kube-system,Attempt:0,} returns sandbox id \"2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3\"" Mar 13 00:31:56.011549 kubelet[2498]: E0313 00:31:56.011341 2498 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96" Mar 13 00:31:56.022162 containerd[1541]: time="2026-03-13T00:31:56.021457289Z" level=info msg="CreateContainer within sandbox \"2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 13 00:31:56.044877 containerd[1541]: time="2026-03-13T00:31:56.044760434Z" level=info msg="Container 35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:31:56.045215 containerd[1541]: time="2026-03-13T00:31:56.045114058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:d484da9f4eef5ab506ae9a819d5ac5f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c\"" Mar 13 00:31:56.050194 kubelet[2498]: E0313 00:31:56.049780 2498 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f" Mar 13 00:31:56.053560 containerd[1541]: time="2026-03-13T00:31:56.053465272Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf,Uid:47e2c19421157cdcf404228783febeb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b\"" Mar 13 00:31:56.054665 containerd[1541]: time="2026-03-13T00:31:56.054632511Z" level=info msg="CreateContainer within sandbox \"ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 13 00:31:56.055495 kubelet[2498]: E0313 00:31:56.055460 2498 kubelet_pods.go:553] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96" Mar 13 00:31:56.059383 containerd[1541]: time="2026-03-13T00:31:56.059321175Z" level=info msg="CreateContainer within sandbox \"2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 13 00:31:56.060127 containerd[1541]: time="2026-03-13T00:31:56.060059040Z" level=info msg="CreateContainer within sandbox \"2cf5e46b329fdecefe3677d7d8673d11e31b6bec290aea1a922ab0d1e93509a3\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a\"" Mar 13 00:31:56.061703 containerd[1541]: time="2026-03-13T00:31:56.061435043Z" level=info msg="StartContainer for \"35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a\"" Mar 13 00:31:56.063186 containerd[1541]: time="2026-03-13T00:31:56.063122050Z" level=info msg="connecting to shim 35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a" address="unix:///run/containerd/s/2d62a9c7e52d23d72795e0ee0232f06f71bbe9c46f6d52d0d86f81fc4f6fd155" protocol=ttrpc version=3 Mar 13 00:31:56.069340 containerd[1541]: 
time="2026-03-13T00:31:56.069300704Z" level=info msg="Container 3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:31:56.072211 containerd[1541]: time="2026-03-13T00:31:56.071942808Z" level=info msg="Container 8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:31:56.081904 containerd[1541]: time="2026-03-13T00:31:56.081861513Z" level=info msg="CreateContainer within sandbox \"ff60ff1543ed7cdcd85d8c201a1012a312f6a5c4715781b388b67a9db2dc159c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d\"" Mar 13 00:31:56.083145 containerd[1541]: time="2026-03-13T00:31:56.083109863Z" level=info msg="StartContainer for \"3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d\"" Mar 13 00:31:56.090990 kubelet[2498]: I0313 00:31:56.089769 2498 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:56.092235 kubelet[2498]: E0313 00:31:56.092158 2498 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.128.0.72:6443/api/v1/nodes\": dial tcp 10.128.0.72:6443: connect: connection refused" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:56.092485 containerd[1541]: time="2026-03-13T00:31:56.092441517Z" level=info msg="connecting to shim 3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d" address="unix:///run/containerd/s/76abfccf5befe7878dd48bf49a65982b26a7c3f7cb08384f6912979fd9f14daf" protocol=ttrpc version=3 Mar 13 00:31:56.095548 systemd[1]: Started cri-containerd-35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a.scope - libcontainer container 35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a. 
Mar 13 00:31:56.101195 containerd[1541]: time="2026-03-13T00:31:56.098157575Z" level=info msg="CreateContainer within sandbox \"2ba18d83bc1383a232992ffa3fa65817c9b3c8ebf35035717f955e0cf787813b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb\"" Mar 13 00:31:56.104200 containerd[1541]: time="2026-03-13T00:31:56.103371030Z" level=info msg="StartContainer for \"8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb\"" Mar 13 00:31:56.107625 kubelet[2498]: E0313 00:31:56.107580 2498 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.128.0.72:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf&limit=500&resourceVersion=0\": dial tcp 10.128.0.72:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Mar 13 00:31:56.109196 containerd[1541]: time="2026-03-13T00:31:56.107822465Z" level=info msg="connecting to shim 8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb" address="unix:///run/containerd/s/b3f108c2c4a613e7c86137b1e1007e04d7348da4dd3b3e15e7e4b844059aa2f2" protocol=ttrpc version=3 Mar 13 00:31:56.147608 systemd[1]: Started cri-containerd-8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb.scope - libcontainer container 8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb. Mar 13 00:31:56.157683 systemd[1]: Started cri-containerd-3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d.scope - libcontainer container 3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d. 
Mar 13 00:31:56.236599 containerd[1541]: time="2026-03-13T00:31:56.236449491Z" level=info msg="StartContainer for \"35c92516204a638c18a6cba6030a7b6d9962503fddf405516f2295c35193270a\" returns successfully" Mar 13 00:31:56.285857 containerd[1541]: time="2026-03-13T00:31:56.283871954Z" level=info msg="StartContainer for \"8cc6b9a18ff479d2dcafe903d05f3bd73a55c4dde4847d6797028cfab6c459fb\" returns successfully" Mar 13 00:31:56.300993 containerd[1541]: time="2026-03-13T00:31:56.300931788Z" level=info msg="StartContainer for \"3b06c8836f34f9d1ea677689d30c167dd3dd6ebe0460f5c397e66cd886f7660d\" returns successfully" Mar 13 00:31:56.342928 kubelet[2498]: E0313 00:31:56.342884 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:56.349167 kubelet[2498]: E0313 00:31:56.349127 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:56.355109 kubelet[2498]: E0313 00:31:56.355078 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:56.898430 kubelet[2498]: I0313 00:31:56.898391 2498 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:57.359054 kubelet[2498]: E0313 00:31:57.359011 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" 
node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:57.359844 kubelet[2498]: E0313 00:31:57.359807 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:58.360953 kubelet[2498]: E0313 00:31:58.360904 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.041720 kubelet[2498]: E0313 00:31:59.041672 2498 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.051488 kubelet[2498]: E0313 00:31:59.051423 2498 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.188904 kubelet[2498]: I0313 00:31:59.188829 2498 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.189116 kubelet[2498]: E0313 00:31:59.188965 2498 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\": node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" not found" Mar 13 00:31:59.233848 kubelet[2498]: I0313 00:31:59.233809 2498 apiserver.go:52] "Watching apiserver" Mar 13 00:31:59.280944 kubelet[2498]: I0313 00:31:59.280884 2498 desired_state_of_world_populator.go:158] "Finished populating initial desired state 
of world" Mar 13 00:31:59.284708 kubelet[2498]: I0313 00:31:59.284668 2498 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.314275 kubelet[2498]: E0313 00:31:59.313409 2498 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.314275 kubelet[2498]: I0313 00:31:59.313444 2498 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.319221 kubelet[2498]: E0313 00:31:59.319185 2498 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.320408 kubelet[2498]: I0313 00:31:59.320209 2498 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:31:59.335366 kubelet[2498]: E0313 00:31:59.335325 2498 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:01.402430 systemd[1]: Reload requested from client PID 2784 ('systemctl') (unit session-9.scope)... Mar 13 00:32:01.402451 systemd[1]: Reloading... Mar 13 00:32:01.604236 zram_generator::config[2829]: No configuration found. 
Mar 13 00:32:02.009419 systemd[1]: Reloading finished in 606 ms. Mar 13 00:32:02.046253 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:32:02.060078 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 00:32:02.060486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:32:02.060584 systemd[1]: kubelet.service: Consumed 1.430s CPU time, 131.4M memory peak. Mar 13 00:32:02.063817 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 13 00:32:02.434944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 13 00:32:02.449716 (kubelet)[2877]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 13 00:32:02.518199 kubelet[2877]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 00:32:02.518199 kubelet[2877]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Mar 13 00:32:02.518199 kubelet[2877]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 00:32:02.518199 kubelet[2877]: I0313 00:32:02.517702 2877 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 00:32:02.530639 kubelet[2877]: I0313 00:32:02.530589 2877 server.go:530] "Kubelet version" kubeletVersion="v1.33.8" Mar 13 00:32:02.530639 kubelet[2877]: I0313 00:32:02.530620 2877 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 00:32:02.530976 kubelet[2877]: I0313 00:32:02.530962 2877 server.go:956] "Client rotation is on, will bootstrap in background" Mar 13 00:32:02.533327 kubelet[2877]: I0313 00:32:02.533294 2877 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Mar 13 00:32:02.537949 kubelet[2877]: I0313 00:32:02.537142 2877 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 13 00:32:02.545081 kubelet[2877]: I0313 00:32:02.545050 2877 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 00:32:02.550814 kubelet[2877]: I0313 00:32:02.550780 2877 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 13 00:32:02.551591 kubelet[2877]: I0313 00:32:02.551095 2877 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 00:32:02.551591 kubelet[2877]: I0313 00:32:02.551144 2877 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 00:32:02.551591 kubelet[2877]: I0313 00:32:02.551388 2877 topology_manager.go:138] "Creating topology 
manager with none policy" Mar 13 00:32:02.551591 kubelet[2877]: I0313 00:32:02.551402 2877 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 00:32:02.551927 kubelet[2877]: I0313 00:32:02.551474 2877 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:32:02.551927 kubelet[2877]: I0313 00:32:02.551732 2877 kubelet.go:480] "Attempting to sync node with API server" Mar 13 00:32:02.551927 kubelet[2877]: I0313 00:32:02.551752 2877 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 00:32:02.552063 kubelet[2877]: I0313 00:32:02.551928 2877 kubelet.go:386] "Adding apiserver pod source" Mar 13 00:32:02.552063 kubelet[2877]: I0313 00:32:02.551958 2877 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 00:32:02.578088 kubelet[2877]: I0313 00:32:02.576444 2877 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Mar 13 00:32:02.582518 kubelet[2877]: I0313 00:32:02.582246 2877 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Mar 13 00:32:02.609476 kubelet[2877]: I0313 00:32:02.608902 2877 watchdog_linux.go:99] "Systemd watchdog is not enabled" Mar 13 00:32:02.609476 kubelet[2877]: I0313 00:32:02.609004 2877 server.go:1289] "Started kubelet" Mar 13 00:32:02.610197 kubelet[2877]: I0313 00:32:02.610076 2877 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 00:32:02.610863 kubelet[2877]: I0313 00:32:02.610832 2877 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 00:32:02.612459 kubelet[2877]: I0313 00:32:02.611716 2877 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 00:32:02.616880 kubelet[2877]: I0313 00:32:02.616858 2877 server.go:317] "Adding debug handlers to 
kubelet server" Mar 13 00:32:02.619103 kubelet[2877]: I0313 00:32:02.611919 2877 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 00:32:02.623845 kubelet[2877]: I0313 00:32:02.619243 2877 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Mar 13 00:32:02.631262 kubelet[2877]: I0313 00:32:02.624851 2877 volume_manager.go:297] "Starting Kubelet Volume Manager" Mar 13 00:32:02.635557 kubelet[2877]: I0313 00:32:02.624886 2877 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Mar 13 00:32:02.635557 kubelet[2877]: I0313 00:32:02.631936 2877 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 13 00:32:02.637298 kubelet[2877]: I0313 00:32:02.637270 2877 reconciler.go:26] "Reconciler: start to sync state" Mar 13 00:32:02.637797 kubelet[2877]: E0313 00:32:02.637734 2877 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 13 00:32:02.643539 kubelet[2877]: I0313 00:32:02.641580 2877 factory.go:223] Registration of the containerd container factory successfully Mar 13 00:32:02.643539 kubelet[2877]: I0313 00:32:02.641599 2877 factory.go:223] Registration of the systemd container factory successfully Mar 13 00:32:02.701130 kubelet[2877]: I0313 00:32:02.700930 2877 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Mar 13 00:32:02.707507 kubelet[2877]: I0313 00:32:02.707471 2877 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Mar 13 00:32:02.707668 kubelet[2877]: I0313 00:32:02.707657 2877 status_manager.go:230] "Starting to sync pod status with apiserver" Mar 13 00:32:02.708354 kubelet[2877]: I0313 00:32:02.708331 2877 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Mar 13 00:32:02.709151 kubelet[2877]: I0313 00:32:02.708485 2877 kubelet.go:2436] "Starting kubelet main sync loop" Mar 13 00:32:02.709151 kubelet[2877]: E0313 00:32:02.708551 2877 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 00:32:02.766241 kubelet[2877]: I0313 00:32:02.766200 2877 cpu_manager.go:221] "Starting CPU manager" policy="none" Mar 13 00:32:02.766241 kubelet[2877]: I0313 00:32:02.766222 2877 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Mar 13 00:32:02.766241 kubelet[2877]: I0313 00:32:02.766247 2877 state_mem.go:36] "Initialized new in-memory state store" Mar 13 00:32:02.766502 kubelet[2877]: I0313 00:32:02.766420 2877 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 00:32:02.766502 kubelet[2877]: I0313 00:32:02.766447 2877 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 00:32:02.766502 kubelet[2877]: I0313 00:32:02.766471 2877 policy_none.go:49] "None policy: Start" Mar 13 00:32:02.766502 kubelet[2877]: I0313 00:32:02.766487 2877 memory_manager.go:186] "Starting memorymanager" policy="None" Mar 13 00:32:02.768497 kubelet[2877]: I0313 00:32:02.767913 2877 state_mem.go:35] "Initializing new in-memory state store" Mar 13 00:32:02.768497 kubelet[2877]: I0313 00:32:02.768200 2877 state_mem.go:75] "Updated machine memory state" Mar 13 00:32:02.778566 kubelet[2877]: E0313 00:32:02.777749 2877 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Mar 13 00:32:02.778566 kubelet[2877]: I0313 
00:32:02.778033 2877 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 00:32:02.778566 kubelet[2877]: I0313 00:32:02.778059 2877 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 00:32:02.779879 kubelet[2877]: I0313 00:32:02.779682 2877 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 00:32:02.781643 kubelet[2877]: E0313 00:32:02.781620 2877 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Mar 13 00:32:02.812088 kubelet[2877]: I0313 00:32:02.811121 2877 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.815040 kubelet[2877]: I0313 00:32:02.815017 2877 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.816631 kubelet[2877]: I0313 00:32:02.816604 2877 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.830516 kubelet[2877]: I0313 00:32:02.830305 2877 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:32:02.830901 kubelet[2877]: I0313 00:32:02.830862 2877 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:32:02.832401 kubelet[2877]: I0313 00:32:02.832380 2877 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 
00:32:02.838487 kubelet[2877]: I0313 00:32:02.838210 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838487 kubelet[2877]: I0313 00:32:02.838282 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-ca-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838487 kubelet[2877]: I0313 00:32:02.838307 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-k8s-certs\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838487 kubelet[2877]: I0313 00:32:02.838326 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/47e2c19421157cdcf404228783febeb7-kubeconfig\") pod \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"47e2c19421157cdcf404228783febeb7\") " pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838710 kubelet[2877]: I0313 
00:32:02.838344 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-k8s-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838710 kubelet[2877]: I0313 00:32:02.838363 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-flexvolume-dir\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838710 kubelet[2877]: I0313 00:32:02.838380 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-kubeconfig\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838710 kubelet[2877]: I0313 00:32:02.838409 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d484da9f4eef5ab506ae9a819d5ac5f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"d484da9f4eef5ab506ae9a819d5ac5f6\") " pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.838914 kubelet[2877]: I0313 00:32:02.838426 
2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/988a46bccf9c0eae97ab67f35df4fd03-ca-certs\") pod \"kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" (UID: \"988a46bccf9c0eae97ab67f35df4fd03\") " pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.913257 kubelet[2877]: I0313 00:32:02.912927 2877 kubelet_node_status.go:75] "Attempting to register node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.926876 kubelet[2877]: I0313 00:32:02.926827 2877 kubelet_node_status.go:124] "Node was previously registered" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:02.927253 kubelet[2877]: I0313 00:32:02.927110 2877 kubelet_node_status.go:78] "Successfully registered node" node="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:03.050046 systemd[1]: Started sshd@9-10.128.0.72:22-111.70.49.180:44923.service - OpenSSH per-connection server daemon (111.70.49.180:44923). 
Mar 13 00:32:03.558208 kubelet[2877]: I0313 00:32:03.556611 2877 apiserver.go:52] "Watching apiserver" Mar 13 00:32:03.636413 kubelet[2877]: I0313 00:32:03.636357 2877 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Mar 13 00:32:03.739628 kubelet[2877]: I0313 00:32:03.739523 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" podStartSLOduration=1.739493023 podStartE2EDuration="1.739493023s" podCreationTimestamp="2026-03-13 00:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:03.725507429 +0000 UTC m=+1.268296021" watchObservedRunningTime="2026-03-13 00:32:03.739493023 +0000 UTC m=+1.282281625" Mar 13 00:32:03.739933 kubelet[2877]: I0313 00:32:03.739683 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" podStartSLOduration=1.7396716460000001 podStartE2EDuration="1.739671646s" podCreationTimestamp="2026-03-13 00:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:03.738072319 +0000 UTC m=+1.280860911" watchObservedRunningTime="2026-03-13 00:32:03.739671646 +0000 UTC m=+1.282460238" Mar 13 00:32:03.754442 kubelet[2877]: I0313 00:32:03.754397 2877 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:03.754881 kubelet[2877]: I0313 00:32:03.754770 2877 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:03.763834 kubelet[2877]: I0313 00:32:03.763795 2877 warnings.go:110] "Warning: 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:32:03.764095 kubelet[2877]: E0313 00:32:03.764031 2877 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" already exists" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:03.767436 kubelet[2877]: I0313 00:32:03.766951 2877 warnings.go:110] "Warning: metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters]" Mar 13 00:32:03.767436 kubelet[2877]: E0313 00:32:03.767030 2877 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" already exists" pod="kube-system/kube-scheduler-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:03.780725 kubelet[2877]: I0313 00:32:03.780663 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" podStartSLOduration=1.780622862 podStartE2EDuration="1.780622862s" podCreationTimestamp="2026-03-13 00:32:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:03.762341855 +0000 UTC m=+1.305130453" watchObservedRunningTime="2026-03-13 00:32:03.780622862 +0000 UTC m=+1.323411454" Mar 13 00:32:05.858731 sshd[2921]: Connection closed by 111.70.49.180 port 44923 [preauth] Mar 13 00:32:05.860905 systemd[1]: sshd@9-10.128.0.72:22-111.70.49.180:44923.service: Deactivated successfully. 
Mar 13 00:32:06.137519 kubelet[2877]: I0313 00:32:06.137360 2877 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 13 00:32:06.138078 containerd[1541]: time="2026-03-13T00:32:06.137969052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 13 00:32:06.139022 kubelet[2877]: I0313 00:32:06.138418 2877 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 13 00:32:06.789952 systemd[1]: Started sshd@10-10.128.0.72:22-200.232.114.71:34781.service - OpenSSH per-connection server daemon (200.232.114.71:34781). Mar 13 00:32:07.116085 systemd[1]: Created slice kubepods-besteffort-poded0edf4c_02c0_4d3a_8d9f_bb7099d30517.slice - libcontainer container kubepods-besteffort-poded0edf4c_02c0_4d3a_8d9f_bb7099d30517.slice. Mar 13 00:32:07.170638 kubelet[2877]: I0313 00:32:07.170508 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ed0edf4c-02c0-4d3a-8d9f-bb7099d30517-kube-proxy\") pod \"kube-proxy-ffzfv\" (UID: \"ed0edf4c-02c0-4d3a-8d9f-bb7099d30517\") " pod="kube-system/kube-proxy-ffzfv" Mar 13 00:32:07.171706 kubelet[2877]: I0313 00:32:07.170765 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed0edf4c-02c0-4d3a-8d9f-bb7099d30517-xtables-lock\") pod \"kube-proxy-ffzfv\" (UID: \"ed0edf4c-02c0-4d3a-8d9f-bb7099d30517\") " pod="kube-system/kube-proxy-ffzfv" Mar 13 00:32:07.171706 kubelet[2877]: I0313 00:32:07.170939 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed0edf4c-02c0-4d3a-8d9f-bb7099d30517-lib-modules\") pod \"kube-proxy-ffzfv\" (UID: \"ed0edf4c-02c0-4d3a-8d9f-bb7099d30517\") " 
pod="kube-system/kube-proxy-ffzfv" Mar 13 00:32:07.171706 kubelet[2877]: I0313 00:32:07.171110 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-272sf\" (UniqueName: \"kubernetes.io/projected/ed0edf4c-02c0-4d3a-8d9f-bb7099d30517-kube-api-access-272sf\") pod \"kube-proxy-ffzfv\" (UID: \"ed0edf4c-02c0-4d3a-8d9f-bb7099d30517\") " pod="kube-system/kube-proxy-ffzfv" Mar 13 00:32:07.428283 containerd[1541]: time="2026-03-13T00:32:07.428233550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffzfv,Uid:ed0edf4c-02c0-4d3a-8d9f-bb7099d30517,Namespace:kube-system,Attempt:0,}" Mar 13 00:32:07.466139 systemd[1]: Created slice kubepods-besteffort-pod62db733c_f6bd_4f63_aec4_1f98bb761435.slice - libcontainer container kubepods-besteffort-pod62db733c_f6bd_4f63_aec4_1f98bb761435.slice. Mar 13 00:32:07.467545 containerd[1541]: time="2026-03-13T00:32:07.467414589Z" level=info msg="connecting to shim 4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c" address="unix:///run/containerd/s/70b30d1f4abbdea23eb71dece17aa22b5d301104d063be82f59e9b451acaff59" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:07.473635 kubelet[2877]: I0313 00:32:07.473491 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/62db733c-f6bd-4f63-aec4-1f98bb761435-var-lib-calico\") pod \"tigera-operator-6bf85f8dd-wk8rp\" (UID: \"62db733c-f6bd-4f63-aec4-1f98bb761435\") " pod="tigera-operator/tigera-operator-6bf85f8dd-wk8rp" Mar 13 00:32:07.473635 kubelet[2877]: I0313 00:32:07.473548 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmcrx\" (UniqueName: \"kubernetes.io/projected/62db733c-f6bd-4f63-aec4-1f98bb761435-kube-api-access-fmcrx\") pod \"tigera-operator-6bf85f8dd-wk8rp\" (UID: \"62db733c-f6bd-4f63-aec4-1f98bb761435\") " 
pod="tigera-operator/tigera-operator-6bf85f8dd-wk8rp" Mar 13 00:32:07.513399 systemd[1]: Started cri-containerd-4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c.scope - libcontainer container 4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c. Mar 13 00:32:07.551993 containerd[1541]: time="2026-03-13T00:32:07.551945816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ffzfv,Uid:ed0edf4c-02c0-4d3a-8d9f-bb7099d30517,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c\"" Mar 13 00:32:07.558798 containerd[1541]: time="2026-03-13T00:32:07.558723066Z" level=info msg="CreateContainer within sandbox \"4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 13 00:32:07.571975 containerd[1541]: time="2026-03-13T00:32:07.571940057Z" level=info msg="Container f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:07.587562 containerd[1541]: time="2026-03-13T00:32:07.587324705Z" level=info msg="CreateContainer within sandbox \"4c5f5490b6a026c43fa3824fb908b84d5367544857d3e8a144fa0d2b55ee1e7c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f\"" Mar 13 00:32:07.588376 containerd[1541]: time="2026-03-13T00:32:07.588344045Z" level=info msg="StartContainer for \"f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f\"" Mar 13 00:32:07.591832 containerd[1541]: time="2026-03-13T00:32:07.591745411Z" level=info msg="connecting to shim f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f" address="unix:///run/containerd/s/70b30d1f4abbdea23eb71dece17aa22b5d301104d063be82f59e9b451acaff59" protocol=ttrpc version=3 Mar 13 00:32:07.616616 systemd[1]: Started 
cri-containerd-f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f.scope - libcontainer container f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f. Mar 13 00:32:07.708103 containerd[1541]: time="2026-03-13T00:32:07.707974329Z" level=info msg="StartContainer for \"f04626d5507dc836fbec5f966037157feabfc98cf38c44abc9d44672f8a8912f\" returns successfully" Mar 13 00:32:07.775307 containerd[1541]: time="2026-03-13T00:32:07.775218889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-wk8rp,Uid:62db733c-f6bd-4f63-aec4-1f98bb761435,Namespace:tigera-operator,Attempt:0,}" Mar 13 00:32:07.811140 containerd[1541]: time="2026-03-13T00:32:07.811028424Z" level=info msg="connecting to shim ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c" address="unix:///run/containerd/s/608809cb3b6f0e7c30550a693dcef4ada40ae4fbc0d446a30db7d4a644eda1aa" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:07.849590 systemd[1]: Started cri-containerd-ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c.scope - libcontainer container ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c. 
Mar 13 00:32:07.957467 containerd[1541]: time="2026-03-13T00:32:07.957374439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6bf85f8dd-wk8rp,Uid:62db733c-f6bd-4f63-aec4-1f98bb761435,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c\"" Mar 13 00:32:07.960204 containerd[1541]: time="2026-03-13T00:32:07.959717513Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\"" Mar 13 00:32:08.904357 kubelet[2877]: I0313 00:32:08.904112 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ffzfv" podStartSLOduration=1.90409083 podStartE2EDuration="1.90409083s" podCreationTimestamp="2026-03-13 00:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:07.780249656 +0000 UTC m=+5.323038251" watchObservedRunningTime="2026-03-13 00:32:08.90409083 +0000 UTC m=+6.446879421" Mar 13 00:32:09.056286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2514271222.mount: Deactivated successfully. Mar 13 00:32:10.460478 sshd[2933]: PAM: Permission denied for operator from 200.232.114.71 Mar 13 00:32:11.120760 sshd[2933]: Connection closed by authenticating user operator 200.232.114.71 port 34781 [preauth] Mar 13 00:32:11.124828 systemd[1]: sshd@10-10.128.0.72:22-200.232.114.71:34781.service: Deactivated successfully. 
Mar 13 00:32:11.374543 containerd[1541]: time="2026-03-13T00:32:11.374391621Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.40.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:11.376205 containerd[1541]: time="2026-03-13T00:32:11.376139041Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.40.7: active requests=0, bytes read=40846156" Mar 13 00:32:11.377623 containerd[1541]: time="2026-03-13T00:32:11.377556832Z" level=info msg="ImageCreate event name:\"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:11.380241 containerd[1541]: time="2026-03-13T00:32:11.380160410Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:11.381317 containerd[1541]: time="2026-03-13T00:32:11.381146914Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.40.7\" with image id \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\", repo tag \"quay.io/tigera/operator:v1.40.7\", repo digest \"quay.io/tigera/operator@sha256:53260704fc6e638633b243729411222e01e1898647352a6e1a09cc046887973a\", size \"40842151\" in 3.421051113s" Mar 13 00:32:11.381317 containerd[1541]: time="2026-03-13T00:32:11.381212049Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.40.7\" returns image reference \"sha256:de04da31b5feb10fd313c39b7ac72d47ce9b5b8eb06161142e2e2283059a52c2\"" Mar 13 00:32:11.386141 containerd[1541]: time="2026-03-13T00:32:11.386104904Z" level=info msg="CreateContainer within sandbox \"ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 13 00:32:11.395213 containerd[1541]: time="2026-03-13T00:32:11.394722496Z" level=info msg="Container 
8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:11.408058 containerd[1541]: time="2026-03-13T00:32:11.408006517Z" level=info msg="CreateContainer within sandbox \"ca865281434db19d2d47980621350068a55d7fb3e6d3d3d59e8dd3a6dbb6981c\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186\"" Mar 13 00:32:11.408743 containerd[1541]: time="2026-03-13T00:32:11.408669669Z" level=info msg="StartContainer for \"8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186\"" Mar 13 00:32:11.411472 containerd[1541]: time="2026-03-13T00:32:11.411419504Z" level=info msg="connecting to shim 8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186" address="unix:///run/containerd/s/608809cb3b6f0e7c30550a693dcef4ada40ae4fbc0d446a30db7d4a644eda1aa" protocol=ttrpc version=3 Mar 13 00:32:11.441372 systemd[1]: Started cri-containerd-8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186.scope - libcontainer container 8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186. 
Mar 13 00:32:11.483999 containerd[1541]: time="2026-03-13T00:32:11.483932717Z" level=info msg="StartContainer for \"8465a97b5eaa5cc2e80f19fc32a7dfb93061359c0db1b8daf55dfd596b62c186\" returns successfully" Mar 13 00:32:13.413487 kubelet[2877]: I0313 00:32:13.413315 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6bf85f8dd-wk8rp" podStartSLOduration=2.989785098 podStartE2EDuration="6.413292726s" podCreationTimestamp="2026-03-13 00:32:07 +0000 UTC" firstStartedPulling="2026-03-13 00:32:07.959009592 +0000 UTC m=+5.501798161" lastFinishedPulling="2026-03-13 00:32:11.382517217 +0000 UTC m=+8.925305789" observedRunningTime="2026-03-13 00:32:11.787476341 +0000 UTC m=+9.330264933" watchObservedRunningTime="2026-03-13 00:32:13.413292726 +0000 UTC m=+10.956081319" Mar 13 00:32:18.511469 sudo[1865]: pam_unix(sudo:session): session closed for user root Mar 13 00:32:18.550697 sshd[1864]: Connection closed by 20.161.92.111 port 56504 Mar 13 00:32:18.549374 sshd-session[1861]: pam_unix(sshd:session): session closed for user core Mar 13 00:32:18.560233 systemd-logind[1525]: Session 9 logged out. Waiting for processes to exit. Mar 13 00:32:18.561485 systemd[1]: sshd@8-10.128.0.72:22-20.161.92.111:56504.service: Deactivated successfully. Mar 13 00:32:18.569108 systemd[1]: session-9.scope: Deactivated successfully. Mar 13 00:32:18.570023 systemd[1]: session-9.scope: Consumed 6.961s CPU time, 231.3M memory peak. Mar 13 00:32:18.576565 systemd-logind[1525]: Removed session 9. Mar 13 00:32:20.134564 systemd[1]: Created slice kubepods-besteffort-podaa020550_948f_4fb8_8a42_ee87e2e4876f.slice - libcontainer container kubepods-besteffort-podaa020550_948f_4fb8_8a42_ee87e2e4876f.slice. 
Mar 13 00:32:20.161896 kubelet[2877]: I0313 00:32:20.161831 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa020550-948f-4fb8-8a42-ee87e2e4876f-tigera-ca-bundle\") pod \"calico-typha-5fbd5bfffb-m2xdk\" (UID: \"aa020550-948f-4fb8-8a42-ee87e2e4876f\") " pod="calico-system/calico-typha-5fbd5bfffb-m2xdk"
Mar 13 00:32:20.161896 kubelet[2877]: I0313 00:32:20.161896 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/aa020550-948f-4fb8-8a42-ee87e2e4876f-typha-certs\") pod \"calico-typha-5fbd5bfffb-m2xdk\" (UID: \"aa020550-948f-4fb8-8a42-ee87e2e4876f\") " pod="calico-system/calico-typha-5fbd5bfffb-m2xdk"
Mar 13 00:32:20.162533 kubelet[2877]: I0313 00:32:20.161925 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j8pr\" (UniqueName: \"kubernetes.io/projected/aa020550-948f-4fb8-8a42-ee87e2e4876f-kube-api-access-9j8pr\") pod \"calico-typha-5fbd5bfffb-m2xdk\" (UID: \"aa020550-948f-4fb8-8a42-ee87e2e4876f\") " pod="calico-system/calico-typha-5fbd5bfffb-m2xdk"
Mar 13 00:32:20.280449 systemd[1]: Created slice kubepods-besteffort-pod33981dbc_a55b_4809_b235_0122b5b9915f.slice - libcontainer container kubepods-besteffort-pod33981dbc_a55b_4809_b235_0122b5b9915f.slice.
Mar 13 00:32:20.369421 kubelet[2877]: I0313 00:32:20.369064 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-policysync\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369421 kubelet[2877]: I0313 00:32:20.369117 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-var-run-calico\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369421 kubelet[2877]: I0313 00:32:20.369164 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-lib-modules\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369421 kubelet[2877]: I0313 00:32:20.369213 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-cni-net-dir\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369421 kubelet[2877]: I0313 00:32:20.369238 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-xtables-lock\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369837 kubelet[2877]: I0313 00:32:20.369271 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-cni-bin-dir\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369837 kubelet[2877]: I0313 00:32:20.369298 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nodeproc\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-nodeproc\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369837 kubelet[2877]: I0313 00:32:20.369325 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/33981dbc-a55b-4809-b235-0122b5b9915f-tigera-ca-bundle\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369837 kubelet[2877]: I0313 00:32:20.369353 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/33981dbc-a55b-4809-b235-0122b5b9915f-node-certs\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.369837 kubelet[2877]: I0313 00:32:20.369380 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-var-lib-calico\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.371050 kubelet[2877]: I0313 00:32:20.370508 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpffs\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-bpffs\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.371050 kubelet[2877]: I0313 00:32:20.370842 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-cni-log-dir\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.371050 kubelet[2877]: I0313 00:32:20.370995 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-flexvol-driver-host\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.371706 kubelet[2877]: I0313 00:32:20.371356 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys-fs\" (UniqueName: \"kubernetes.io/host-path/33981dbc-a55b-4809-b235-0122b5b9915f-sys-fs\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.372403 kubelet[2877]: I0313 00:32:20.372254 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjldp\" (UniqueName: \"kubernetes.io/projected/33981dbc-a55b-4809-b235-0122b5b9915f-kube-api-access-jjldp\") pod \"calico-node-7hv7p\" (UID: \"33981dbc-a55b-4809-b235-0122b5b9915f\") " pod="calico-system/calico-node-7hv7p"
Mar 13 00:32:20.420493 kubelet[2877]: E0313 00:32:20.420350 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be"
Mar 13 00:32:20.442578 containerd[1541]: time="2026-03-13T00:32:20.442530405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbd5bfffb-m2xdk,Uid:aa020550-948f-4fb8-8a42-ee87e2e4876f,Namespace:calico-system,Attempt:0,}"
Mar 13 00:32:20.473461 kubelet[2877]: I0313 00:32:20.472768 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e6f70af-e699-4466-b0e0-161b9b41f3be-kubelet-dir\") pod \"csi-node-driver-8lrsn\" (UID: \"2e6f70af-e699-4466-b0e0-161b9b41f3be\") " pod="calico-system/csi-node-driver-8lrsn"
Mar 13 00:32:20.474342 kubelet[2877]: I0313 00:32:20.474264 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdn7\" (UniqueName: \"kubernetes.io/projected/2e6f70af-e699-4466-b0e0-161b9b41f3be-kube-api-access-gzdn7\") pod \"csi-node-driver-8lrsn\" (UID: \"2e6f70af-e699-4466-b0e0-161b9b41f3be\") " pod="calico-system/csi-node-driver-8lrsn"
Mar 13 00:32:20.477206 kubelet[2877]: I0313 00:32:20.474592 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2e6f70af-e699-4466-b0e0-161b9b41f3be-varrun\") pod \"csi-node-driver-8lrsn\" (UID: \"2e6f70af-e699-4466-b0e0-161b9b41f3be\") " pod="calico-system/csi-node-driver-8lrsn"
Mar 13 00:32:20.477206 kubelet[2877]: I0313 00:32:20.474633 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2e6f70af-e699-4466-b0e0-161b9b41f3be-socket-dir\") pod \"csi-node-driver-8lrsn\" (UID: \"2e6f70af-e699-4466-b0e0-161b9b41f3be\") " pod="calico-system/csi-node-driver-8lrsn"
Mar 13 00:32:20.480535 kubelet[2877]: I0313 00:32:20.480237 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2e6f70af-e699-4466-b0e0-161b9b41f3be-registration-dir\") pod \"csi-node-driver-8lrsn\" (UID: \"2e6f70af-e699-4466-b0e0-161b9b41f3be\") " pod="calico-system/csi-node-driver-8lrsn"
Mar 13 00:32:20.483068 kubelet[2877]: E0313 00:32:20.483048 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.483229 kubelet[2877]: W0313 00:32:20.483211 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.483345 kubelet[2877]: E0313 00:32:20.483330 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.487092 kubelet[2877]: E0313 00:32:20.487071 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.487264 kubelet[2877]: W0313 00:32:20.487243 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.487450 kubelet[2877]: E0313 00:32:20.487414 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.497198 kubelet[2877]: E0313 00:32:20.495205 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.497198 kubelet[2877]: W0313 00:32:20.495240 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.497198 kubelet[2877]: E0313 00:32:20.495259 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.507195 containerd[1541]: time="2026-03-13T00:32:20.505462796Z" level=info msg="connecting to shim 0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9" address="unix:///run/containerd/s/18b739417a0c0aacf5211bcb5d985d4ebc7b04348ef0d1748c60a2ab38faa087" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:32:20.518199 kubelet[2877]: E0313 00:32:20.515090 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.518199 kubelet[2877]: W0313 00:32:20.515134 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.518199 kubelet[2877]: E0313 00:32:20.515156 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.582755 systemd[1]: Started cri-containerd-0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9.scope - libcontainer container 0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9.
Mar 13 00:32:20.584121 kubelet[2877]: E0313 00:32:20.584087 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.584121 kubelet[2877]: W0313 00:32:20.584116 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.584786 kubelet[2877]: E0313 00:32:20.584143 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.586206 kubelet[2877]: E0313 00:32:20.585410 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.586206 kubelet[2877]: W0313 00:32:20.585431 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.586206 kubelet[2877]: E0313 00:32:20.585450 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.586206 kubelet[2877]: E0313 00:32:20.585807 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.586206 kubelet[2877]: W0313 00:32:20.585821 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.586206 kubelet[2877]: E0313 00:32:20.585838 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.586206 kubelet[2877]: E0313 00:32:20.586205 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.586615 kubelet[2877]: W0313 00:32:20.586227 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.586615 kubelet[2877]: E0313 00:32:20.586356 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.586717 kubelet[2877]: E0313 00:32:20.586642 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.586717 kubelet[2877]: W0313 00:32:20.586656 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.586717 kubelet[2877]: E0313 00:32:20.586669 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.588219 kubelet[2877]: E0313 00:32:20.587538 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.588219 kubelet[2877]: W0313 00:32:20.587561 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.588219 kubelet[2877]: E0313 00:32:20.587579 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.588219 kubelet[2877]: E0313 00:32:20.588117 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.589302 kubelet[2877]: W0313 00:32:20.588598 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.589302 kubelet[2877]: E0313 00:32:20.588628 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.590427 kubelet[2877]: E0313 00:32:20.590404 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.590427 kubelet[2877]: W0313 00:32:20.590425 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.590763 kubelet[2877]: E0313 00:32:20.590443 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.590763 kubelet[2877]: E0313 00:32:20.590739 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.590763 kubelet[2877]: W0313 00:32:20.590752 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.591494 kubelet[2877]: E0313 00:32:20.590768 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.591494 kubelet[2877]: E0313 00:32:20.591378 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.591494 kubelet[2877]: W0313 00:32:20.591393 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.591494 kubelet[2877]: E0313 00:32:20.591410 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.591789 kubelet[2877]: E0313 00:32:20.591751 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.591789 kubelet[2877]: W0313 00:32:20.591767 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.591789 kubelet[2877]: E0313 00:32:20.591783 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592091 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.593195 kubelet[2877]: W0313 00:32:20.592108 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592126 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592432 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.593195 kubelet[2877]: W0313 00:32:20.592444 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592459 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592889 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.593195 kubelet[2877]: W0313 00:32:20.592903 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.593195 kubelet[2877]: E0313 00:32:20.592918 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.593494 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596059 kubelet[2877]: W0313 00:32:20.593506 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.593521 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.593880 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596059 kubelet[2877]: W0313 00:32:20.593895 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.593911 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.594277 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596059 kubelet[2877]: W0313 00:32:20.594292 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.594311 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596059 kubelet[2877]: E0313 00:32:20.594652 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596701 kubelet[2877]: W0313 00:32:20.594666 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.594681 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.595019 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596701 kubelet[2877]: W0313 00:32:20.595033 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.595049 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.595405 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596701 kubelet[2877]: W0313 00:32:20.595420 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.595435 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.596701 kubelet[2877]: E0313 00:32:20.595766 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.596701 kubelet[2877]: W0313 00:32:20.595778 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.595794 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.596085 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.598272 kubelet[2877]: W0313 00:32:20.596106 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.596122 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.596667 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.598272 kubelet[2877]: W0313 00:32:20.596681 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.596698 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.597398 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.598272 kubelet[2877]: W0313 00:32:20.597415 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.598272 kubelet[2877]: E0313 00:32:20.597432 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.600594 kubelet[2877]: E0313 00:32:20.598065 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.600594 kubelet[2877]: W0313 00:32:20.598082 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.600594 kubelet[2877]: E0313 00:32:20.598099 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.603541 containerd[1541]: time="2026-03-13T00:32:20.603498286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hv7p,Uid:33981dbc-a55b-4809-b235-0122b5b9915f,Namespace:calico-system,Attempt:0,}"
Mar 13 00:32:20.633509 kubelet[2877]: E0313 00:32:20.633402 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:20.633509 kubelet[2877]: W0313 00:32:20.633426 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:20.633509 kubelet[2877]: E0313 00:32:20.633450 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 13 00:32:20.650864 containerd[1541]: time="2026-03-13T00:32:20.650798463Z" level=info msg="connecting to shim db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8" address="unix:///run/containerd/s/47d0b74f141c5327e16c83980bb3c03ecd77402dc530da94779f23fbd7243e78" namespace=k8s.io protocol=ttrpc version=3
Mar 13 00:32:20.719561 systemd[1]: Started cri-containerd-db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8.scope - libcontainer container db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8.
Mar 13 00:32:20.783485 containerd[1541]: time="2026-03-13T00:32:20.783396914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbd5bfffb-m2xdk,Uid:aa020550-948f-4fb8-8a42-ee87e2e4876f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9\""
Mar 13 00:32:20.785629 containerd[1541]: time="2026-03-13T00:32:20.785567652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7hv7p,Uid:33981dbc-a55b-4809-b235-0122b5b9915f,Namespace:calico-system,Attempt:0,} returns sandbox id \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\""
Mar 13 00:32:20.786078 containerd[1541]: time="2026-03-13T00:32:20.785761024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\""
Mar 13 00:32:21.923513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount399501028.mount: Deactivated successfully.
Mar 13 00:32:22.711753 kubelet[2877]: E0313 00:32:22.711698 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be"
Mar 13 00:32:22.910642 containerd[1541]: time="2026-03-13T00:32:22.910581104Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:32:22.911818 containerd[1541]: time="2026-03-13T00:32:22.911761757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.31.4: active requests=0, bytes read=36107596"
Mar 13 00:32:22.912963 containerd[1541]: time="2026-03-13T00:32:22.912896303Z" level=info msg="ImageCreate event name:\"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:32:22.915829 containerd[1541]: time="2026-03-13T00:32:22.915766949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:32:22.916789 containerd[1541]: time="2026-03-13T00:32:22.916651213Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.31.4\" with image id \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:d9396cfcd63dfcf72a65903042e473bb0bafc0cceb56bd71cd84078498a87130\", size \"36107450\" in 2.130851733s"
Mar 13 00:32:22.916789 containerd[1541]: time="2026-03-13T00:32:22.916691442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.31.4\" returns image reference \"sha256:46766605472b59b9c16342b2cc74da11f598baa9ba6d1e8b07b3f8ab4f29c55b\""
Mar 13 00:32:22.918191 containerd[1541]: time="2026-03-13T00:32:22.918107870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\""
Mar 13 00:32:22.942323 containerd[1541]: time="2026-03-13T00:32:22.942276134Z" level=info msg="CreateContainer within sandbox \"0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Mar 13 00:32:22.950738 containerd[1541]: time="2026-03-13T00:32:22.950376867Z" level=info msg="Container 84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:32:22.964860 containerd[1541]: time="2026-03-13T00:32:22.964676516Z" level=info msg="CreateContainer within sandbox \"0095095fc014660cc1c4b9f4a298cd8308d70a932aa7ff202cd54d136ee330c9\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd\""
Mar 13 00:32:22.965755 containerd[1541]: time="2026-03-13T00:32:22.965715808Z" level=info msg="StartContainer for \"84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd\""
Mar 13 00:32:22.967988 containerd[1541]: time="2026-03-13T00:32:22.967911638Z" level=info msg="connecting to shim 84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd" address="unix:///run/containerd/s/18b739417a0c0aacf5211bcb5d985d4ebc7b04348ef0d1748c60a2ab38faa087" protocol=ttrpc version=3
Mar 13 00:32:22.998400 systemd[1]: Started cri-containerd-84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd.scope - libcontainer container 84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd.
Mar 13 00:32:23.077984 containerd[1541]: time="2026-03-13T00:32:23.077909724Z" level=info msg="StartContainer for \"84a2e76f7ecc4673a6f9fbe8ca2bcf20e06598837c06d54962687b1bb9b785fd\" returns successfully"
Mar 13 00:32:23.834344 kubelet[2877]: I0313 00:32:23.834275 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fbd5bfffb-m2xdk" podStartSLOduration=1.701510657 podStartE2EDuration="3.834253771s" podCreationTimestamp="2026-03-13 00:32:20 +0000 UTC" firstStartedPulling="2026-03-13 00:32:20.785048861 +0000 UTC m=+18.327837447" lastFinishedPulling="2026-03-13 00:32:22.917791993 +0000 UTC m=+20.460580561" observedRunningTime="2026-03-13 00:32:23.833555531 +0000 UTC m=+21.376344124" watchObservedRunningTime="2026-03-13 00:32:23.834253771 +0000 UTC m=+21.377042365"
Mar 13 00:32:23.865489 kubelet[2877]: E0313 00:32:23.865309 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 13 00:32:23.865489 kubelet[2877]: W0313 00:32:23.865335 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 13 00:32:23.865489 kubelet[2877]: E0313 00:32:23.865361
2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.866192 kubelet[2877]: E0313 00:32:23.865973 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.866192 kubelet[2877]: W0313 00:32:23.865992 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.866192 kubelet[2877]: E0313 00:32:23.866011 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.866475 kubelet[2877]: E0313 00:32:23.866454 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.866557 kubelet[2877]: W0313 00:32:23.866473 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.866557 kubelet[2877]: E0313 00:32:23.866505 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.866888 kubelet[2877]: E0313 00:32:23.866864 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.866888 kubelet[2877]: W0313 00:32:23.866884 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.867017 kubelet[2877]: E0313 00:32:23.866903 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.867474 kubelet[2877]: E0313 00:32:23.867451 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.867474 kubelet[2877]: W0313 00:32:23.867471 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.867640 kubelet[2877]: E0313 00:32:23.867489 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.867791 kubelet[2877]: E0313 00:32:23.867773 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.867791 kubelet[2877]: W0313 00:32:23.867789 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.867918 kubelet[2877]: E0313 00:32:23.867805 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.868088 kubelet[2877]: E0313 00:32:23.868071 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.868215 kubelet[2877]: W0313 00:32:23.868095 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.868215 kubelet[2877]: E0313 00:32:23.868112 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.868447 kubelet[2877]: E0313 00:32:23.868422 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.868447 kubelet[2877]: W0313 00:32:23.868442 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.868568 kubelet[2877]: E0313 00:32:23.868459 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.868771 kubelet[2877]: E0313 00:32:23.868751 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.868771 kubelet[2877]: W0313 00:32:23.868768 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.868924 kubelet[2877]: E0313 00:32:23.868784 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.869066 kubelet[2877]: E0313 00:32:23.869048 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.869066 kubelet[2877]: W0313 00:32:23.869063 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.869225 kubelet[2877]: E0313 00:32:23.869081 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.869410 kubelet[2877]: E0313 00:32:23.869389 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.869410 kubelet[2877]: W0313 00:32:23.869407 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.869545 kubelet[2877]: E0313 00:32:23.869422 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.869735 kubelet[2877]: E0313 00:32:23.869713 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.869735 kubelet[2877]: W0313 00:32:23.869731 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.869882 kubelet[2877]: E0313 00:32:23.869747 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.870079 kubelet[2877]: E0313 00:32:23.870059 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.870079 kubelet[2877]: W0313 00:32:23.870076 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.871020 kubelet[2877]: E0313 00:32:23.870098 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.871020 kubelet[2877]: E0313 00:32:23.870401 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.871020 kubelet[2877]: W0313 00:32:23.870415 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.871020 kubelet[2877]: E0313 00:32:23.870430 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.871020 kubelet[2877]: E0313 00:32:23.870747 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.871020 kubelet[2877]: W0313 00:32:23.870759 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.871020 kubelet[2877]: E0313 00:32:23.870774 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.914646 kubelet[2877]: E0313 00:32:23.914602 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.914646 kubelet[2877]: W0313 00:32:23.914627 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.914646 kubelet[2877]: E0313 00:32:23.914649 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.915013 kubelet[2877]: E0313 00:32:23.914993 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.915013 kubelet[2877]: W0313 00:32:23.915011 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.915333 kubelet[2877]: E0313 00:32:23.915027 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.915396 kubelet[2877]: E0313 00:32:23.915378 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.915396 kubelet[2877]: W0313 00:32:23.915392 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.915569 kubelet[2877]: E0313 00:32:23.915409 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.915717 kubelet[2877]: E0313 00:32:23.915699 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.915717 kubelet[2877]: W0313 00:32:23.915715 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.915926 kubelet[2877]: E0313 00:32:23.915731 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.916024 kubelet[2877]: E0313 00:32:23.916005 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.916024 kubelet[2877]: W0313 00:32:23.916023 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.916136 kubelet[2877]: E0313 00:32:23.916040 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.916438 kubelet[2877]: E0313 00:32:23.916416 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.916438 kubelet[2877]: W0313 00:32:23.916436 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.916581 kubelet[2877]: E0313 00:32:23.916456 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.917164 kubelet[2877]: E0313 00:32:23.916965 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.917164 kubelet[2877]: W0313 00:32:23.916985 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.917164 kubelet[2877]: E0313 00:32:23.917002 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.917648 kubelet[2877]: E0313 00:32:23.917625 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.917648 kubelet[2877]: W0313 00:32:23.917644 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.917788 kubelet[2877]: E0313 00:32:23.917661 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.917963 kubelet[2877]: E0313 00:32:23.917945 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.918063 kubelet[2877]: W0313 00:32:23.917985 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.918063 kubelet[2877]: E0313 00:32:23.918004 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.918413 kubelet[2877]: E0313 00:32:23.918393 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.918413 kubelet[2877]: W0313 00:32:23.918412 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.918540 kubelet[2877]: E0313 00:32:23.918428 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.918787 kubelet[2877]: E0313 00:32:23.918765 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.918787 kubelet[2877]: W0313 00:32:23.918783 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.918920 kubelet[2877]: E0313 00:32:23.918799 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.919314 kubelet[2877]: E0313 00:32:23.919195 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.919519 kubelet[2877]: W0313 00:32:23.919365 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.919519 kubelet[2877]: E0313 00:32:23.919386 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.919872 kubelet[2877]: E0313 00:32:23.919816 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.919872 kubelet[2877]: W0313 00:32:23.919840 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.919872 kubelet[2877]: E0313 00:32:23.919857 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.920165 kubelet[2877]: E0313 00:32:23.920146 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.920165 kubelet[2877]: W0313 00:32:23.920163 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.920324 kubelet[2877]: E0313 00:32:23.920204 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.920586 kubelet[2877]: E0313 00:32:23.920496 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.920586 kubelet[2877]: W0313 00:32:23.920571 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.920718 kubelet[2877]: E0313 00:32:23.920589 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.920923 kubelet[2877]: E0313 00:32:23.920902 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.920923 kubelet[2877]: W0313 00:32:23.920920 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.921052 kubelet[2877]: E0313 00:32:23.920936 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:23.921285 kubelet[2877]: E0313 00:32:23.921264 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.921285 kubelet[2877]: W0313 00:32:23.921281 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.921414 kubelet[2877]: E0313 00:32:23.921297 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 13 00:32:23.921990 kubelet[2877]: E0313 00:32:23.921969 2877 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 13 00:32:23.921990 kubelet[2877]: W0313 00:32:23.921987 2877 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 13 00:32:23.922139 kubelet[2877]: E0313 00:32:23.922003 2877 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 13 00:32:24.106864 containerd[1541]: time="2026-03-13T00:32:24.106709724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:24.110196 containerd[1541]: time="2026-03-13T00:32:24.109313206Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4: active requests=0, bytes read=4630250" Mar 13 00:32:24.111225 containerd[1541]: time="2026-03-13T00:32:24.109543538Z" level=info msg="ImageCreate event name:\"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:24.113918 containerd[1541]: time="2026-03-13T00:32:24.113397779Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:24.114843 containerd[1541]: time="2026-03-13T00:32:24.114484804Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" with image id \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:5fa3492ac4dfef9cc34fe70a51289118e1f715a89133ea730eef81ad789dadbc\", size \"6186255\" in 1.196153816s" Mar 13 00:32:24.114843 containerd[1541]: time="2026-03-13T00:32:24.114527947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.31.4\" returns image reference \"sha256:a6ea0cf732d820506ae9f1d7e7433a14009026b894fbbb8f346b9a5f5335c47e\"" Mar 13 00:32:24.120814 containerd[1541]: time="2026-03-13T00:32:24.120734459Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 13 00:32:24.133697 containerd[1541]: time="2026-03-13T00:32:24.133364211Z" level=info msg="Container 4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:24.143405 containerd[1541]: time="2026-03-13T00:32:24.143365683Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec\"" Mar 13 00:32:24.144447 containerd[1541]: time="2026-03-13T00:32:24.144396708Z" level=info msg="StartContainer for \"4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec\"" Mar 13 00:32:24.146867 containerd[1541]: time="2026-03-13T00:32:24.146831369Z" level=info msg="connecting to shim 4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec" address="unix:///run/containerd/s/47d0b74f141c5327e16c83980bb3c03ecd77402dc530da94779f23fbd7243e78" protocol=ttrpc version=3 Mar 13 00:32:24.175368 systemd[1]: Started cri-containerd-4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec.scope - libcontainer container 4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec. Mar 13 00:32:24.264883 containerd[1541]: time="2026-03-13T00:32:24.264580798Z" level=info msg="StartContainer for \"4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec\" returns successfully" Mar 13 00:32:24.278201 systemd[1]: cri-containerd-4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec.scope: Deactivated successfully. 
Mar 13 00:32:24.283725 containerd[1541]: time="2026-03-13T00:32:24.283661861Z" level=info msg="received container exit event container_id:\"4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec\" id:\"4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec\" pid:3521 exited_at:{seconds:1773361944 nanos:283160218}" Mar 13 00:32:24.319495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c0a3343ce077b41e2ec8446a14b461e35183b0b59b7c1900d57f24682aba2ec-rootfs.mount: Deactivated successfully. Mar 13 00:32:24.713983 kubelet[2877]: E0313 00:32:24.712486 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:24.829226 kubelet[2877]: I0313 00:32:24.827443 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:32:25.834032 containerd[1541]: time="2026-03-13T00:32:25.833968119Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\"" Mar 13 00:32:26.710217 kubelet[2877]: E0313 00:32:26.710097 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:28.709866 kubelet[2877]: E0313 00:32:28.709337 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:30.709117 kubelet[2877]: 
E0313 00:32:30.709066 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:32.712489 kubelet[2877]: E0313 00:32:32.712430 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:32.942731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1366117889.mount: Deactivated successfully. Mar 13 00:32:32.974355 containerd[1541]: time="2026-03-13T00:32:32.973956947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:32.975710 containerd[1541]: time="2026-03-13T00:32:32.975660664Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.31.4: active requests=0, bytes read=159838564" Mar 13 00:32:32.977827 containerd[1541]: time="2026-03-13T00:32:32.977753354Z" level=info msg="ImageCreate event name:\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:32.980320 containerd[1541]: time="2026-03-13T00:32:32.980253768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:32.981377 containerd[1541]: time="2026-03-13T00:32:32.981195733Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.31.4\" with image id 
\"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\", repo tag \"ghcr.io/flatcar/calico/node:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:22b9d32dc7480c96272121d5682d53424c6e58653c60fa869b61a1758a11d77f\", size \"159838426\" in 7.147165841s" Mar 13 00:32:32.981377 containerd[1541]: time="2026-03-13T00:32:32.981266014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.31.4\" returns image reference \"sha256:e6536b93706eda782f82ebadcac3559cb61801d09f982cc0533a134e6a8e1acf\"" Mar 13 00:32:32.987839 containerd[1541]: time="2026-03-13T00:32:32.987790880Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for container &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,}" Mar 13 00:32:32.999142 containerd[1541]: time="2026-03-13T00:32:32.997254521Z" level=info msg="Container b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:33.020004 containerd[1541]: time="2026-03-13T00:32:33.019941914Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for &ContainerMetadata{Name:ebpf-bootstrap,Attempt:0,} returns container id \"b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d\"" Mar 13 00:32:33.020805 containerd[1541]: time="2026-03-13T00:32:33.020746856Z" level=info msg="StartContainer for \"b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d\"" Mar 13 00:32:33.023636 containerd[1541]: time="2026-03-13T00:32:33.023588968Z" level=info msg="connecting to shim b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d" address="unix:///run/containerd/s/47d0b74f141c5327e16c83980bb3c03ecd77402dc530da94779f23fbd7243e78" protocol=ttrpc version=3 Mar 13 00:32:33.057409 systemd[1]: Started cri-containerd-b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d.scope - libcontainer container 
b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d. Mar 13 00:32:33.155476 containerd[1541]: time="2026-03-13T00:32:33.155424882Z" level=info msg="StartContainer for \"b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d\" returns successfully" Mar 13 00:32:33.219808 systemd[1]: cri-containerd-b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d.scope: Deactivated successfully. Mar 13 00:32:33.226932 containerd[1541]: time="2026-03-13T00:32:33.226681861Z" level=info msg="received container exit event container_id:\"b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d\" id:\"b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d\" pid:3572 exited_at:{seconds:1773361953 nanos:226165352}" Mar 13 00:32:33.940118 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4dfcdff3402a9c010c8c162f482086b572f2d3461f994628fdfaa9bc46d7b8d-rootfs.mount: Deactivated successfully. Mar 13 00:32:34.709797 kubelet[2877]: E0313 00:32:34.709751 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:34.873554 containerd[1541]: time="2026-03-13T00:32:34.873402209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\"" Mar 13 00:32:36.711157 kubelet[2877]: E0313 00:32:36.711086 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:37.618262 kubelet[2877]: I0313 00:32:37.618220 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 
13 00:32:38.166937 containerd[1541]: time="2026-03-13T00:32:38.166879859Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:38.168221 containerd[1541]: time="2026-03-13T00:32:38.168001608Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.31.4: active requests=0, bytes read=70611671" Mar 13 00:32:38.169454 containerd[1541]: time="2026-03-13T00:32:38.169377082Z" level=info msg="ImageCreate event name:\"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:38.172333 containerd[1541]: time="2026-03-13T00:32:38.172268898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:38.173558 containerd[1541]: time="2026-03-13T00:32:38.173392474Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.31.4\" with image id \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\", repo tag \"ghcr.io/flatcar/calico/cni:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:f1c5d9a6df01061c5faec4c4b59fb9ba69f8f5164b51e01ea8daa8e373111a04\", size \"72167716\" in 3.299935995s" Mar 13 00:32:38.173558 containerd[1541]: time="2026-03-13T00:32:38.173435115Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.31.4\" returns image reference \"sha256:c433a27dd94ce9242338eece49f11629412dd42552fed314746fcf16ea958b2b\"" Mar 13 00:32:38.179201 containerd[1541]: time="2026-03-13T00:32:38.178280359Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 13 00:32:38.187576 containerd[1541]: time="2026-03-13T00:32:38.187540020Z" level=info msg="Container 
4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:38.209116 containerd[1541]: time="2026-03-13T00:32:38.209056100Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d\"" Mar 13 00:32:38.209868 containerd[1541]: time="2026-03-13T00:32:38.209797781Z" level=info msg="StartContainer for \"4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d\"" Mar 13 00:32:38.211956 containerd[1541]: time="2026-03-13T00:32:38.211907974Z" level=info msg="connecting to shim 4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d" address="unix:///run/containerd/s/47d0b74f141c5327e16c83980bb3c03ecd77402dc530da94779f23fbd7243e78" protocol=ttrpc version=3 Mar 13 00:32:38.247396 systemd[1]: Started cri-containerd-4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d.scope - libcontainer container 4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d. Mar 13 00:32:38.370183 containerd[1541]: time="2026-03-13T00:32:38.369915298Z" level=info msg="StartContainer for \"4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d\" returns successfully" Mar 13 00:32:38.709874 kubelet[2877]: E0313 00:32:38.709797 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:39.393153 systemd[1]: cri-containerd-4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d.scope: Deactivated successfully. 
Mar 13 00:32:39.394073 systemd[1]: cri-containerd-4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d.scope: Consumed 663ms CPU time, 197.4M memory peak, 177M written to disk. Mar 13 00:32:39.395554 containerd[1541]: time="2026-03-13T00:32:39.394764327Z" level=info msg="received container exit event container_id:\"4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d\" id:\"4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d\" pid:3635 exited_at:{seconds:1773361959 nanos:394504739}" Mar 13 00:32:39.403869 kubelet[2877]: I0313 00:32:39.403838 2877 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Mar 13 00:32:39.452829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b3d5c317094ebc7f26de54f8e6c19a492137a1b9f2ac41e1dc430811f95940d-rootfs.mount: Deactivated successfully. Mar 13 00:32:39.732721 kubelet[2877]: I0313 00:32:39.732650 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4b44de6-b1a2-4e6b-a665-3ac368121f3e-config-volume\") pod \"coredns-674b8bbfcf-mtvkp\" (UID: \"b4b44de6-b1a2-4e6b-a665-3ac368121f3e\") " pod="kube-system/coredns-674b8bbfcf-mtvkp" Mar 13 00:32:39.732721 kubelet[2877]: I0313 00:32:39.732708 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tn2q\" (UniqueName: \"kubernetes.io/projected/b4b44de6-b1a2-4e6b-a665-3ac368121f3e-kube-api-access-5tn2q\") pod \"coredns-674b8bbfcf-mtvkp\" (UID: \"b4b44de6-b1a2-4e6b-a665-3ac368121f3e\") " pod="kube-system/coredns-674b8bbfcf-mtvkp" Mar 13 00:32:39.777369 systemd[1]: Created slice kubepods-burstable-pode2bf3ce5_f601_452a_966a_8cd01fc34905.slice - libcontainer container kubepods-burstable-pode2bf3ce5_f601_452a_966a_8cd01fc34905.slice. 
Mar 13 00:32:39.825674 systemd[1]: Created slice kubepods-besteffort-podb9c3c193_f779_4025_83cf_8cb7199d19c1.slice - libcontainer container kubepods-besteffort-podb9c3c193_f779_4025_83cf_8cb7199d19c1.slice. Mar 13 00:32:39.832985 kubelet[2877]: I0313 00:32:39.832930 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2bf3ce5-f601-452a-966a-8cd01fc34905-config-volume\") pod \"coredns-674b8bbfcf-tvgbn\" (UID: \"e2bf3ce5-f601-452a-966a-8cd01fc34905\") " pod="kube-system/coredns-674b8bbfcf-tvgbn" Mar 13 00:32:39.833241 kubelet[2877]: I0313 00:32:39.833001 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzfpz\" (UniqueName: \"kubernetes.io/projected/e2bf3ce5-f601-452a-966a-8cd01fc34905-kube-api-access-tzfpz\") pod \"coredns-674b8bbfcf-tvgbn\" (UID: \"e2bf3ce5-f601-452a-966a-8cd01fc34905\") " pod="kube-system/coredns-674b8bbfcf-tvgbn" Mar 13 00:32:39.933918 kubelet[2877]: I0313 00:32:39.933874 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b9c3c193-f779-4025-83cf-8cb7199d19c1-calico-apiserver-certs\") pod \"calico-apiserver-659d4f9cbf-ltc7r\" (UID: \"b9c3c193-f779-4025-83cf-8cb7199d19c1\") " pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" Mar 13 00:32:39.933918 kubelet[2877]: I0313 00:32:39.933964 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-577tx\" (UniqueName: \"kubernetes.io/projected/b9c3c193-f779-4025-83cf-8cb7199d19c1-kube-api-access-577tx\") pod \"calico-apiserver-659d4f9cbf-ltc7r\" (UID: \"b9c3c193-f779-4025-83cf-8cb7199d19c1\") " pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" Mar 13 00:32:39.969927 systemd[1]: Created slice kubepods-burstable-podb4b44de6_b1a2_4e6b_a665_3ac368121f3e.slice - 
libcontainer container kubepods-burstable-podb4b44de6_b1a2_4e6b_a665_3ac368121f3e.slice. Mar 13 00:32:39.979203 containerd[1541]: time="2026-03-13T00:32:39.979087125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mtvkp,Uid:b4b44de6-b1a2-4e6b-a665-3ac368121f3e,Namespace:kube-system,Attempt:0,}" Mar 13 00:32:39.981897 systemd[1]: Created slice kubepods-besteffort-podd29d464e_9db9_4c76_b402_8acd480907f8.slice - libcontainer container kubepods-besteffort-podd29d464e_9db9_4c76_b402_8acd480907f8.slice. Mar 13 00:32:40.035141 kubelet[2877]: I0313 00:32:40.034359 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d29d464e-9db9-4c76-b402-8acd480907f8-tigera-ca-bundle\") pod \"calico-kube-controllers-76b45757c7-kfgc2\" (UID: \"d29d464e-9db9-4c76-b402-8acd480907f8\") " pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" Mar 13 00:32:40.044071 systemd[1]: Created slice kubepods-besteffort-pod553b882e_95e4_408c_80fc_0731b316e88e.slice - libcontainer container kubepods-besteffort-pod553b882e_95e4_408c_80fc_0731b316e88e.slice. Mar 13 00:32:40.046625 kubelet[2877]: I0313 00:32:40.045464 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6tsr\" (UniqueName: \"kubernetes.io/projected/d29d464e-9db9-4c76-b402-8acd480907f8-kube-api-access-j6tsr\") pod \"calico-kube-controllers-76b45757c7-kfgc2\" (UID: \"d29d464e-9db9-4c76-b402-8acd480907f8\") " pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" Mar 13 00:32:40.085774 systemd[1]: Created slice kubepods-besteffort-pod83ee5173_94ac_4835_8915_de75a4be9229.slice - libcontainer container kubepods-besteffort-pod83ee5173_94ac_4835_8915_de75a4be9229.slice. 
Mar 13 00:32:40.094703 containerd[1541]: time="2026-03-13T00:32:40.093814412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvgbn,Uid:e2bf3ce5-f601-452a-966a-8cd01fc34905,Namespace:kube-system,Attempt:0,}" Mar 13 00:32:40.107687 systemd[1]: Created slice kubepods-besteffort-pod43d46660_bac5_4548_8a40_07368360529c.slice - libcontainer container kubepods-besteffort-pod43d46660_bac5_4548_8a40_07368360529c.slice. Mar 13 00:32:40.133928 containerd[1541]: time="2026-03-13T00:32:40.133842516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-ltc7r,Uid:b9c3c193-f779-4025-83cf-8cb7199d19c1,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.146020 kubelet[2877]: I0313 00:32:40.145977 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/553b882e-95e4-408c-80fc-0731b316e88e-whisker-backend-key-pair\") pod \"whisker-db9f6d6b6-27tmv\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.146634 kubelet[2877]: I0313 00:32:40.146586 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sjjp\" (UniqueName: \"kubernetes.io/projected/43d46660-bac5-4548-8a40-07368360529c-kube-api-access-6sjjp\") pod \"goldmane-5b85766d88-ttt8q\" (UID: \"43d46660-bac5-4548-8a40-07368360529c\") " pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.147401 kubelet[2877]: I0313 00:32:40.146783 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxn6l\" (UniqueName: \"kubernetes.io/projected/83ee5173-94ac-4835-8915-de75a4be9229-kube-api-access-lxn6l\") pod \"calico-apiserver-659d4f9cbf-767k7\" (UID: \"83ee5173-94ac-4835-8915-de75a4be9229\") " pod="calico-system/calico-apiserver-659d4f9cbf-767k7" Mar 13 00:32:40.147529 kubelet[2877]: 
I0313 00:32:40.147491 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-nginx-config\") pod \"whisker-db9f6d6b6-27tmv\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.147625 kubelet[2877]: I0313 00:32:40.147528 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpnd\" (UniqueName: \"kubernetes.io/projected/553b882e-95e4-408c-80fc-0731b316e88e-kube-api-access-ztpnd\") pod \"whisker-db9f6d6b6-27tmv\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.147625 kubelet[2877]: I0313 00:32:40.147577 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/43d46660-bac5-4548-8a40-07368360529c-config\") pod \"goldmane-5b85766d88-ttt8q\" (UID: \"43d46660-bac5-4548-8a40-07368360529c\") " pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.147625 kubelet[2877]: I0313 00:32:40.147607 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43d46660-bac5-4548-8a40-07368360529c-goldmane-ca-bundle\") pod \"goldmane-5b85766d88-ttt8q\" (UID: \"43d46660-bac5-4548-8a40-07368360529c\") " pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.147924 kubelet[2877]: I0313 00:32:40.147644 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/43d46660-bac5-4548-8a40-07368360529c-goldmane-key-pair\") pod \"goldmane-5b85766d88-ttt8q\" (UID: \"43d46660-bac5-4548-8a40-07368360529c\") " pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.147924 
kubelet[2877]: I0313 00:32:40.147695 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-whisker-ca-bundle\") pod \"whisker-db9f6d6b6-27tmv\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.147924 kubelet[2877]: I0313 00:32:40.147724 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/83ee5173-94ac-4835-8915-de75a4be9229-calico-apiserver-certs\") pod \"calico-apiserver-659d4f9cbf-767k7\" (UID: \"83ee5173-94ac-4835-8915-de75a4be9229\") " pod="calico-system/calico-apiserver-659d4f9cbf-767k7" Mar 13 00:32:40.246707 containerd[1541]: time="2026-03-13T00:32:40.246653360Z" level=error msg="Failed to destroy network for sandbox \"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.248419 containerd[1541]: time="2026-03-13T00:32:40.248369498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mtvkp,Uid:b4b44de6-b1a2-4e6b-a665-3ac368121f3e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.248717 kubelet[2877]: E0313 00:32:40.248642 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.248826 kubelet[2877]: E0313 00:32:40.248741 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mtvkp" Mar 13 00:32:40.248826 kubelet[2877]: E0313 00:32:40.248793 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-mtvkp" Mar 13 00:32:40.248939 kubelet[2877]: E0313 00:32:40.248861 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-mtvkp_kube-system(b4b44de6-b1a2-4e6b-a665-3ac368121f3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-mtvkp_kube-system(b4b44de6-b1a2-4e6b-a665-3ac368121f3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"004eefad98de627a29dba3173e570e433a98172a2e9998bf1c08d589631e219d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-mtvkp" 
podUID="b4b44de6-b1a2-4e6b-a665-3ac368121f3e" Mar 13 00:32:40.249821 containerd[1541]: time="2026-03-13T00:32:40.249096889Z" level=error msg="Failed to destroy network for sandbox \"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.254804 containerd[1541]: time="2026-03-13T00:32:40.254275886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvgbn,Uid:e2bf3ce5-f601-452a-966a-8cd01fc34905,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.261020 kubelet[2877]: E0313 00:32:40.260107 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.261020 kubelet[2877]: E0313 00:32:40.260393 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvgbn" Mar 13 00:32:40.261020 kubelet[2877]: E0313 
00:32:40.260627 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tvgbn" Mar 13 00:32:40.262467 kubelet[2877]: E0313 00:32:40.260913 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tvgbn_kube-system(e2bf3ce5-f601-452a-966a-8cd01fc34905)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tvgbn_kube-system(e2bf3ce5-f601-452a-966a-8cd01fc34905)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b8a333c8c37486862ecb257941c25e0c22be460b7919aa59b68b631eb9bdec5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tvgbn" podUID="e2bf3ce5-f601-452a-966a-8cd01fc34905" Mar 13 00:32:40.289767 containerd[1541]: time="2026-03-13T00:32:40.289105010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b45757c7-kfgc2,Uid:d29d464e-9db9-4c76-b402-8acd480907f8,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.316457 containerd[1541]: time="2026-03-13T00:32:40.316377986Z" level=error msg="Failed to destroy network for sandbox \"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.318385 containerd[1541]: time="2026-03-13T00:32:40.318332286Z" level=error 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-ltc7r,Uid:b9c3c193-f779-4025-83cf-8cb7199d19c1,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.318990 kubelet[2877]: E0313 00:32:40.318927 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.319272 kubelet[2877]: E0313 00:32:40.319159 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" Mar 13 00:32:40.319272 kubelet[2877]: E0313 00:32:40.319232 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" Mar 13 00:32:40.319609 kubelet[2877]: E0313 
00:32:40.319555 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-659d4f9cbf-ltc7r_calico-system(b9c3c193-f779-4025-83cf-8cb7199d19c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-659d4f9cbf-ltc7r_calico-system(b9c3c193-f779-4025-83cf-8cb7199d19c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4ed7d6a799eb556425e7723f895705c6b0c6aaaf29bcadaf9fa6937ce642df1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" podUID="b9c3c193-f779-4025-83cf-8cb7199d19c1" Mar 13 00:32:40.367196 containerd[1541]: time="2026-03-13T00:32:40.367116269Z" level=error msg="Failed to destroy network for sandbox \"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.369285 containerd[1541]: time="2026-03-13T00:32:40.368803777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b45757c7-kfgc2,Uid:d29d464e-9db9-4c76-b402-8acd480907f8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.369481 kubelet[2877]: E0313 00:32:40.369062 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.369481 kubelet[2877]: E0313 00:32:40.369122 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" Mar 13 00:32:40.369481 kubelet[2877]: E0313 00:32:40.369149 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" Mar 13 00:32:40.369676 kubelet[2877]: E0313 00:32:40.369235 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76b45757c7-kfgc2_calico-system(d29d464e-9db9-4c76-b402-8acd480907f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76b45757c7-kfgc2_calico-system(d29d464e-9db9-4c76-b402-8acd480907f8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"673e5cdf64c496cef40b10b66b04eaa2a59f453c75ad37b6e000d19e2d896eaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" podUID="d29d464e-9db9-4c76-b402-8acd480907f8" Mar 13 00:32:40.370478 containerd[1541]: time="2026-03-13T00:32:40.370443909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db9f6d6b6-27tmv,Uid:553b882e-95e4-408c-80fc-0731b316e88e,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.398200 containerd[1541]: time="2026-03-13T00:32:40.397997008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-767k7,Uid:83ee5173-94ac-4835-8915-de75a4be9229,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.417229 containerd[1541]: time="2026-03-13T00:32:40.416946681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ttt8q,Uid:43d46660-bac5-4548-8a40-07368360529c,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.490398 containerd[1541]: time="2026-03-13T00:32:40.490340822Z" level=error msg="Failed to destroy network for sandbox \"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.493493 containerd[1541]: time="2026-03-13T00:32:40.493437653Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-db9f6d6b6-27tmv,Uid:553b882e-95e4-408c-80fc-0731b316e88e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.495278 kubelet[2877]: E0313 00:32:40.494303 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.495278 kubelet[2877]: E0313 00:32:40.494382 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.495278 kubelet[2877]: E0313 00:32:40.494414 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-db9f6d6b6-27tmv" Mar 13 00:32:40.495542 kubelet[2877]: E0313 00:32:40.494492 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-db9f6d6b6-27tmv_calico-system(553b882e-95e4-408c-80fc-0731b316e88e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-db9f6d6b6-27tmv_calico-system(553b882e-95e4-408c-80fc-0731b316e88e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f967cd501ed96e41f255d2e09f5ebcd8be0463d7b95121af6763e476dc807b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/whisker-db9f6d6b6-27tmv" podUID="553b882e-95e4-408c-80fc-0731b316e88e" Mar 13 00:32:40.499853 systemd[1]: run-netns-cni\x2d3e576276\x2d31aa\x2d708e\x2dd42b\x2da9f2cff98e3f.mount: Deactivated successfully. Mar 13 00:32:40.516048 systemd[1]: run-netns-cni\x2d088aece8\x2d78e3\x2d0a5d\x2da62f\x2dfbb65b550d6d.mount: Deactivated successfully. Mar 13 00:32:40.570292 containerd[1541]: time="2026-03-13T00:32:40.570066876Z" level=error msg="Failed to destroy network for sandbox \"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.576615 containerd[1541]: time="2026-03-13T00:32:40.576488059Z" level=error msg="Failed to destroy network for sandbox \"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.577260 containerd[1541]: time="2026-03-13T00:32:40.576857936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ttt8q,Uid:43d46660-bac5-4548-8a40-07368360529c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.577127 systemd[1]: run-netns-cni\x2db312c3db\x2d3437\x2d9d5a\x2d44de\x2dced067480e2e.mount: Deactivated successfully. 
Mar 13 00:32:40.580705 containerd[1541]: time="2026-03-13T00:32:40.580611897Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-767k7,Uid:83ee5173-94ac-4835-8915-de75a4be9229,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.583215 kubelet[2877]: E0313 00:32:40.580908 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.583215 kubelet[2877]: E0313 00:32:40.581288 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.583215 kubelet[2877]: E0313 00:32:40.581349 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-659d4f9cbf-767k7" Mar 13 00:32:40.583215 
kubelet[2877]: E0313 00:32:40.581384 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-apiserver-659d4f9cbf-767k7" Mar 13 00:32:40.583412 kubelet[2877]: E0313 00:32:40.581448 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-659d4f9cbf-767k7_calico-system(83ee5173-94ac-4835-8915-de75a4be9229)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-659d4f9cbf-767k7_calico-system(83ee5173-94ac-4835-8915-de75a4be9229)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a21960d4f47a12a6debc8bbfb4107f0760399377043a3abbf715084eeeff4b70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-apiserver-659d4f9cbf-767k7" podUID="83ee5173-94ac-4835-8915-de75a4be9229" Mar 13 00:32:40.583412 kubelet[2877]: E0313 00:32:40.581985 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.583412 kubelet[2877]: E0313 00:32:40.582058 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-5b85766d88-ttt8q" Mar 13 00:32:40.583546 kubelet[2877]: E0313 00:32:40.582837 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-5b85766d88-ttt8q_calico-system(43d46660-bac5-4548-8a40-07368360529c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-5b85766d88-ttt8q_calico-system(43d46660-bac5-4548-8a40-07368360529c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5ec68e0b78eaf774342b31c52188fff95db34a457634620f599e07b7e22beaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-5b85766d88-ttt8q" podUID="43d46660-bac5-4548-8a40-07368360529c" Mar 13 00:32:40.585288 systemd[1]: run-netns-cni\x2df6723b51\x2dd43e\x2d7b77\x2ddd9d\x2d9f3729d8664f.mount: Deactivated successfully. Mar 13 00:32:40.718607 systemd[1]: Created slice kubepods-besteffort-pod2e6f70af_e699_4466_b0e0_161b9b41f3be.slice - libcontainer container kubepods-besteffort-pod2e6f70af_e699_4466_b0e0_161b9b41f3be.slice. 
Mar 13 00:32:40.722637 containerd[1541]: time="2026-03-13T00:32:40.722578520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lrsn,Uid:2e6f70af-e699-4466-b0e0-161b9b41f3be,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:40.794487 containerd[1541]: time="2026-03-13T00:32:40.794423306Z" level=error msg="Failed to destroy network for sandbox \"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.796072 containerd[1541]: time="2026-03-13T00:32:40.796019994Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lrsn,Uid:2e6f70af-e699-4466-b0e0-161b9b41f3be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.796565 kubelet[2877]: E0313 00:32:40.796517 2877 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 13 00:32:40.797256 kubelet[2877]: E0313 00:32:40.796591 2877 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lrsn" Mar 13 00:32:40.797256 kubelet[2877]: E0313 00:32:40.796629 2877 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8lrsn" Mar 13 00:32:40.797256 kubelet[2877]: E0313 00:32:40.796709 2877 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8lrsn_calico-system(2e6f70af-e699-4466-b0e0-161b9b41f3be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8lrsn_calico-system(2e6f70af-e699-4466-b0e0-161b9b41f3be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c027de2f9d50a7ea20f586693ed582c8e9f0c36ea50b54f43960e27ccd3d22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8lrsn" podUID="2e6f70af-e699-4466-b0e0-161b9b41f3be" Mar 13 00:32:40.926535 containerd[1541]: time="2026-03-13T00:32:40.926438063Z" level=info msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 13 00:32:40.944205 containerd[1541]: time="2026-03-13T00:32:40.944119184Z" level=info msg="Container 61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:40.957036 containerd[1541]: time="2026-03-13T00:32:40.956981755Z" level=info 
msg="CreateContainer within sandbox \"db63ee7e8d0f6e8557bfaff624882770d30e742a8d3f215584cbd2c3e19903b8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b\"" Mar 13 00:32:40.958230 containerd[1541]: time="2026-03-13T00:32:40.958158789Z" level=info msg="StartContainer for \"61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b\"" Mar 13 00:32:40.962424 containerd[1541]: time="2026-03-13T00:32:40.962383193Z" level=info msg="connecting to shim 61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b" address="unix:///run/containerd/s/47d0b74f141c5327e16c83980bb3c03ecd77402dc530da94779f23fbd7243e78" protocol=ttrpc version=3 Mar 13 00:32:40.991372 systemd[1]: Started cri-containerd-61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b.scope - libcontainer container 61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b. Mar 13 00:32:41.096806 containerd[1541]: time="2026-03-13T00:32:41.096681980Z" level=info msg="StartContainer for \"61e5d051b216cf65c8d1831edc57793742d84d76d3e3175750edd4308d92fa9b\" returns successfully" Mar 13 00:32:41.358975 kubelet[2877]: I0313 00:32:41.358462 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-whisker-ca-bundle\") pod \"553b882e-95e4-408c-80fc-0731b316e88e\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " Mar 13 00:32:41.359790 kubelet[2877]: I0313 00:32:41.359599 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ztpnd\" (UniqueName: \"kubernetes.io/projected/553b882e-95e4-408c-80fc-0731b316e88e-kube-api-access-ztpnd\") pod \"553b882e-95e4-408c-80fc-0731b316e88e\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " Mar 13 00:32:41.360105 kubelet[2877]: I0313 00:32:41.359770 2877 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-nginx-config\") pod \"553b882e-95e4-408c-80fc-0731b316e88e\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " Mar 13 00:32:41.360483 kubelet[2877]: I0313 00:32:41.360106 2877 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/553b882e-95e4-408c-80fc-0731b316e88e-whisker-backend-key-pair\") pod \"553b882e-95e4-408c-80fc-0731b316e88e\" (UID: \"553b882e-95e4-408c-80fc-0731b316e88e\") " Mar 13 00:32:41.360977 kubelet[2877]: I0313 00:32:41.360944 2877 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "553b882e-95e4-408c-80fc-0731b316e88e" (UID: "553b882e-95e4-408c-80fc-0731b316e88e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:32:41.363001 kubelet[2877]: I0313 00:32:41.362965 2877 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-nginx-config" (OuterVolumeSpecName: "nginx-config") pod "553b882e-95e4-408c-80fc-0731b316e88e" (UID: "553b882e-95e4-408c-80fc-0731b316e88e"). InnerVolumeSpecName "nginx-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Mar 13 00:32:41.367585 kubelet[2877]: I0313 00:32:41.367403 2877 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/553b882e-95e4-408c-80fc-0731b316e88e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "553b882e-95e4-408c-80fc-0731b316e88e" (UID: "553b882e-95e4-408c-80fc-0731b316e88e"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Mar 13 00:32:41.367814 kubelet[2877]: I0313 00:32:41.367793 2877 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/553b882e-95e4-408c-80fc-0731b316e88e-kube-api-access-ztpnd" (OuterVolumeSpecName: "kube-api-access-ztpnd") pod "553b882e-95e4-408c-80fc-0731b316e88e" (UID: "553b882e-95e4-408c-80fc-0731b316e88e"). InnerVolumeSpecName "kube-api-access-ztpnd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Mar 13 00:32:41.452896 systemd[1]: run-netns-cni\x2d0e8ca37e\x2de982\x2dc72d\x2dcc78\x2d361262a5da4a.mount: Deactivated successfully. Mar 13 00:32:41.453040 systemd[1]: var-lib-kubelet-pods-553b882e\x2d95e4\x2d408c\x2d80fc\x2d0731b316e88e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dztpnd.mount: Deactivated successfully. Mar 13 00:32:41.453149 systemd[1]: var-lib-kubelet-pods-553b882e\x2d95e4\x2d408c\x2d80fc\x2d0731b316e88e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Mar 13 00:32:41.461565 kubelet[2877]: I0313 00:32:41.461511 2877 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ztpnd\" (UniqueName: \"kubernetes.io/projected/553b882e-95e4-408c-80fc-0731b316e88e-kube-api-access-ztpnd\") on node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" DevicePath \"\"" Mar 13 00:32:41.461778 kubelet[2877]: I0313 00:32:41.461588 2877 reconciler_common.go:299] "Volume detached for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-nginx-config\") on node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" DevicePath \"\"" Mar 13 00:32:41.461778 kubelet[2877]: I0313 00:32:41.461607 2877 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/553b882e-95e4-408c-80fc-0731b316e88e-whisker-backend-key-pair\") on node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" DevicePath \"\"" Mar 13 00:32:41.461778 kubelet[2877]: I0313 00:32:41.461623 2877 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/553b882e-95e4-408c-80fc-0731b316e88e-whisker-ca-bundle\") on node \"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf\" DevicePath \"\"" Mar 13 00:32:41.923472 systemd[1]: Removed slice kubepods-besteffort-pod553b882e_95e4_408c_80fc_0731b316e88e.slice - libcontainer container kubepods-besteffort-pod553b882e_95e4_408c_80fc_0731b316e88e.slice. 
Mar 13 00:32:41.952785 kubelet[2877]: I0313 00:32:41.952709 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7hv7p" podStartSLOduration=4.566617867 podStartE2EDuration="21.952686459s" podCreationTimestamp="2026-03-13 00:32:20 +0000 UTC" firstStartedPulling="2026-03-13 00:32:20.788380765 +0000 UTC m=+18.331169346" lastFinishedPulling="2026-03-13 00:32:38.174449357 +0000 UTC m=+35.717237938" observedRunningTime="2026-03-13 00:32:41.950985592 +0000 UTC m=+39.493774184" watchObservedRunningTime="2026-03-13 00:32:41.952686459 +0000 UTC m=+39.495475051" Mar 13 00:32:42.042020 systemd[1]: Created slice kubepods-besteffort-podceb9f0c5_e0f1_46ed_8514_f153731cdfd7.slice - libcontainer container kubepods-besteffort-podceb9f0c5_e0f1_46ed_8514_f153731cdfd7.slice. Mar 13 00:32:42.165375 kubelet[2877]: I0313 00:32:42.165296 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/ceb9f0c5-e0f1-46ed-8514-f153731cdfd7-whisker-backend-key-pair\") pod \"whisker-fb5dcbd47-zfnww\" (UID: \"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7\") " pod="calico-system/whisker-fb5dcbd47-zfnww" Mar 13 00:32:42.165375 kubelet[2877]: I0313 00:32:42.165361 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-config\" (UniqueName: \"kubernetes.io/configmap/ceb9f0c5-e0f1-46ed-8514-f153731cdfd7-nginx-config\") pod \"whisker-fb5dcbd47-zfnww\" (UID: \"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7\") " pod="calico-system/whisker-fb5dcbd47-zfnww" Mar 13 00:32:42.165666 kubelet[2877]: I0313 00:32:42.165394 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceb9f0c5-e0f1-46ed-8514-f153731cdfd7-whisker-ca-bundle\") pod \"whisker-fb5dcbd47-zfnww\" (UID: \"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7\") " 
pod="calico-system/whisker-fb5dcbd47-zfnww" Mar 13 00:32:42.165666 kubelet[2877]: I0313 00:32:42.165430 2877 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sqqn\" (UniqueName: \"kubernetes.io/projected/ceb9f0c5-e0f1-46ed-8514-f153731cdfd7-kube-api-access-2sqqn\") pod \"whisker-fb5dcbd47-zfnww\" (UID: \"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7\") " pod="calico-system/whisker-fb5dcbd47-zfnww" Mar 13 00:32:42.351429 containerd[1541]: time="2026-03-13T00:32:42.350746132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5dcbd47-zfnww,Uid:ceb9f0c5-e0f1-46ed-8514-f153731cdfd7,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:42.504728 systemd-networkd[1449]: calia286a661ea3: Link UP Mar 13 00:32:42.506417 systemd-networkd[1449]: calia286a661ea3: Gained carrier Mar 13 00:32:42.531027 containerd[1541]: 2026-03-13 00:32:42.389 [ERROR][3973] cni-plugin/utils.go 116: File does not exist, skipping the error since RequireMTUFile is false error=open /var/lib/calico/mtu: no such file or directory filename="/var/lib/calico/mtu" Mar 13 00:32:42.531027 containerd[1541]: 2026-03-13 00:32:42.403 [INFO][3973] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0 whisker-fb5dcbd47- calico-system ceb9f0c5-e0f1-46ed-8514-f153731cdfd7 930 0 2026-03-13 00:32:41 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:fb5dcbd47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf whisker-fb5dcbd47-zfnww eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calia286a661ea3 [] [] }} ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" 
Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-" Mar 13 00:32:42.531027 containerd[1541]: 2026-03-13 00:32:42.403 [INFO][3973] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.531027 containerd[1541]: 2026-03-13 00:32:42.435 [INFO][3986] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" HandleID="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.446 [INFO][3986] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" HandleID="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"whisker-fb5dcbd47-zfnww", "timestamp":"2026-03-13 00:32:42.435325193 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0006b6000)} Mar 13 00:32:42.532559 containerd[1541]: 
2026-03-13 00:32:42.446 [INFO][3986] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.446 [INFO][3986] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.446 [INFO][3986] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.449 [INFO][3986] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.454 [INFO][3986] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.459 [INFO][3986] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532559 containerd[1541]: 2026-03-13 00:32:42.462 [INFO][3986] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.465 [INFO][3986] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.465 [INFO][3986] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.466 [INFO][3986] ipam/ipam.go 1806: Creating new handle: 
k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.472 [INFO][3986] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 handle="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.478 [INFO][3986] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.65/26] block=192.168.87.64/26 handle="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.478 [INFO][3986] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.65/26] handle="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.479 [INFO][3986] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:32:42.532979 containerd[1541]: 2026-03-13 00:32:42.479 [INFO][3986] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.65/26] IPv6=[] ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" HandleID="k8s-pod-network.cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.534582 containerd[1541]: 2026-03-13 00:32:42.483 [INFO][3973] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0", GenerateName:"whisker-fb5dcbd47-", Namespace:"calico-system", SelfLink:"", UID:"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb5dcbd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"whisker-fb5dcbd47-zfnww", Endpoint:"eth0", ServiceAccountName:"whisker", 
IPNetworks:[]string{"192.168.87.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia286a661ea3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:42.534702 containerd[1541]: 2026-03-13 00:32:42.483 [INFO][3973] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.65/32] ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.534702 containerd[1541]: 2026-03-13 00:32:42.483 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia286a661ea3 ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.534702 containerd[1541]: 2026-03-13 00:32:42.505 [INFO][3973] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.534862 containerd[1541]: 2026-03-13 00:32:42.508 [INFO][3973] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0", GenerateName:"whisker-fb5dcbd47-", Namespace:"calico-system", SelfLink:"", UID:"ceb9f0c5-e0f1-46ed-8514-f153731cdfd7", ResourceVersion:"930", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"fb5dcbd47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec", Pod:"whisker-fb5dcbd47-zfnww", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.87.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calia286a661ea3", MAC:"56:b8:e9:4e:07:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:42.537257 containerd[1541]: 2026-03-13 00:32:42.523 [INFO][3973] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" Namespace="calico-system" Pod="whisker-fb5dcbd47-zfnww" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-whisker--fb5dcbd47--zfnww-eth0" Mar 13 00:32:42.589329 containerd[1541]: 
time="2026-03-13T00:32:42.589264153Z" level=info msg="connecting to shim cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec" address="unix:///run/containerd/s/f38124fa7a873d61ccd05c729d94673a6da034e378061a387fec51a7df064e55" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:42.643394 systemd[1]: Started cri-containerd-cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec.scope - libcontainer container cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec. Mar 13 00:32:42.717456 kubelet[2877]: I0313 00:32:42.717347 2877 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="553b882e-95e4-408c-80fc-0731b316e88e" path="/var/lib/kubelet/pods/553b882e-95e4-408c-80fc-0731b316e88e/volumes" Mar 13 00:32:42.784135 containerd[1541]: time="2026-03-13T00:32:42.784082679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-fb5dcbd47-zfnww,Uid:ceb9f0c5-e0f1-46ed-8514-f153731cdfd7,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec\"" Mar 13 00:32:42.795247 containerd[1541]: time="2026-03-13T00:32:42.795182627Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\"" Mar 13 00:32:43.902450 systemd-networkd[1449]: calia286a661ea3: Gained IPv6LL Mar 13 00:32:44.014327 systemd-networkd[1449]: vxlan.calico: Link UP Mar 13 00:32:44.014347 systemd-networkd[1449]: vxlan.calico: Gained carrier Mar 13 00:32:44.226327 containerd[1541]: time="2026-03-13T00:32:44.226246244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:44.229151 containerd[1541]: time="2026-03-13T00:32:44.229112207Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.31.4: active requests=0, bytes read=6039889" Mar 13 00:32:44.230099 containerd[1541]: time="2026-03-13T00:32:44.230065671Z" level=info msg="ImageCreate event 
name:\"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:44.235529 containerd[1541]: time="2026-03-13T00:32:44.235467668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:44.238039 containerd[1541]: time="2026-03-13T00:32:44.237882261Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.31.4\" with image id \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:9690cd395efad501f2e0c40ce4969d87b736ae2e5ed454644e7b0fd8f756bfbc\", size \"7595926\" in 1.442650388s" Mar 13 00:32:44.238039 containerd[1541]: time="2026-03-13T00:32:44.237924949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.31.4\" returns image reference \"sha256:c02b0051502f3aa7f0815d838ea93b53dfb6bd13f185d229260e08200daf7cf7\"" Mar 13 00:32:44.244900 containerd[1541]: time="2026-03-13T00:32:44.244763059Z" level=info msg="CreateContainer within sandbox \"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec\" for container &ContainerMetadata{Name:whisker,Attempt:0,}" Mar 13 00:32:44.256717 containerd[1541]: time="2026-03-13T00:32:44.256335357Z" level=info msg="Container d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:44.273194 containerd[1541]: time="2026-03-13T00:32:44.273140226Z" level=info msg="CreateContainer within sandbox \"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887\"" Mar 13 00:32:44.273876 containerd[1541]: 
time="2026-03-13T00:32:44.273833536Z" level=info msg="StartContainer for \"d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887\"" Mar 13 00:32:44.275499 containerd[1541]: time="2026-03-13T00:32:44.275459124Z" level=info msg="connecting to shim d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887" address="unix:///run/containerd/s/f38124fa7a873d61ccd05c729d94673a6da034e378061a387fec51a7df064e55" protocol=ttrpc version=3 Mar 13 00:32:44.324436 systemd[1]: Started cri-containerd-d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887.scope - libcontainer container d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887. Mar 13 00:32:44.419956 containerd[1541]: time="2026-03-13T00:32:44.419829700Z" level=info msg="StartContainer for \"d8f222c8a9e843bb230cda8e5cc3640dba49e0ef55271a5d32325b5fd0dec887\" returns successfully" Mar 13 00:32:44.424341 containerd[1541]: time="2026-03-13T00:32:44.424298220Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\"" Mar 13 00:32:45.503635 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Mar 13 00:32:45.966997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1515785318.mount: Deactivated successfully. 
Mar 13 00:32:45.986710 containerd[1541]: time="2026-03-13T00:32:45.986654961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:45.988000 containerd[1541]: time="2026-03-13T00:32:45.987940394Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.31.4: active requests=0, bytes read=17609475" Mar 13 00:32:45.989132 containerd[1541]: time="2026-03-13T00:32:45.989067227Z" level=info msg="ImageCreate event name:\"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:45.992130 containerd[1541]: time="2026-03-13T00:32:45.992064179Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:45.993468 containerd[1541]: time="2026-03-13T00:32:45.993051884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" with image id \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:d252061aa298c4b17cf092517b5126af97cf95e0f56b21281b95a5f8702f15fc\", size \"17609305\" in 1.568704071s" Mar 13 00:32:45.993468 containerd[1541]: time="2026-03-13T00:32:45.993095066Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.31.4\" returns image reference \"sha256:0749e3da0398e8402eb119f09acf145e5dd9759adb6eb3802ad6dc1b9bbedf1c\"" Mar 13 00:32:45.998292 containerd[1541]: time="2026-03-13T00:32:45.998166879Z" level=info msg="CreateContainer within sandbox \"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}" Mar 13 00:32:46.009286 
containerd[1541]: time="2026-03-13T00:32:46.008312505Z" level=info msg="Container 0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:46.017021 containerd[1541]: time="2026-03-13T00:32:46.016963001Z" level=info msg="CreateContainer within sandbox \"cb468224ee9c7fc1826a928186fcd915fc7497ad5b2a2527c5eba06dfa38fbec\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a\"" Mar 13 00:32:46.019818 containerd[1541]: time="2026-03-13T00:32:46.019759524Z" level=info msg="StartContainer for \"0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a\"" Mar 13 00:32:46.021731 containerd[1541]: time="2026-03-13T00:32:46.021686587Z" level=info msg="connecting to shim 0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a" address="unix:///run/containerd/s/f38124fa7a873d61ccd05c729d94673a6da034e378061a387fec51a7df064e55" protocol=ttrpc version=3 Mar 13 00:32:46.057372 systemd[1]: Started cri-containerd-0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a.scope - libcontainer container 0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a. 
Mar 13 00:32:46.129681 containerd[1541]: time="2026-03-13T00:32:46.129633089Z" level=info msg="StartContainer for \"0f6cfd8ff020a3ec28bf5a36cde60961b6007a909e4a9069d224fcbc2a62176a\" returns successfully" Mar 13 00:32:47.748640 ntpd[1633]: Listen normally on 6 vxlan.calico 192.168.87.64:123 Mar 13 00:32:47.748730 ntpd[1633]: Listen normally on 7 calia286a661ea3 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 13 00:32:47.749251 ntpd[1633]: 13 Mar 00:32:47 ntpd[1633]: Listen normally on 6 vxlan.calico 192.168.87.64:123 Mar 13 00:32:47.749251 ntpd[1633]: 13 Mar 00:32:47 ntpd[1633]: Listen normally on 7 calia286a661ea3 [fe80::ecee:eeff:feee:eeee%4]:123 Mar 13 00:32:47.749251 ntpd[1633]: 13 Mar 00:32:47 ntpd[1633]: Listen normally on 8 vxlan.calico [fe80::648b:dbff:febf:c8d8%5]:123 Mar 13 00:32:47.748773 ntpd[1633]: Listen normally on 8 vxlan.calico [fe80::648b:dbff:febf:c8d8%5]:123 Mar 13 00:32:51.710216 containerd[1541]: time="2026-03-13T00:32:51.710111802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mtvkp,Uid:b4b44de6-b1a2-4e6b-a665-3ac368121f3e,Namespace:kube-system,Attempt:0,}" Mar 13 00:32:51.848851 systemd-networkd[1449]: calib96ed3e8cf2: Link UP Mar 13 00:32:51.850396 systemd-networkd[1449]: calib96ed3e8cf2: Gained carrier Mar 13 00:32:51.870896 kubelet[2877]: I0313 00:32:51.870356 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-fb5dcbd47-zfnww" podStartSLOduration=7.668673104 podStartE2EDuration="10.870328206s" podCreationTimestamp="2026-03-13 00:32:41 +0000 UTC" firstStartedPulling="2026-03-13 00:32:42.792431922 +0000 UTC m=+40.335220487" lastFinishedPulling="2026-03-13 00:32:45.994087017 +0000 UTC m=+43.536875589" observedRunningTime="2026-03-13 00:32:46.9534737 +0000 UTC m=+44.496262361" watchObservedRunningTime="2026-03-13 00:32:51.870328206 +0000 UTC m=+49.413116797" Mar 13 00:32:51.878869 containerd[1541]: 2026-03-13 00:32:51.762 [INFO][4383] cni-plugin/plugin.go 342: Calico CNI 
found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0 coredns-674b8bbfcf- kube-system b4b44de6-b1a2-4e6b-a665-3ac368121f3e 876 0 2026-03-13 00:32:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf coredns-674b8bbfcf-mtvkp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib96ed3e8cf2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-" Mar 13 00:32:51.878869 containerd[1541]: 2026-03-13 00:32:51.762 [INFO][4383] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.878869 containerd[1541]: 2026-03-13 00:32:51.795 [INFO][4394] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" HandleID="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.805 [INFO][4394] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" 
HandleID="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fddd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"coredns-674b8bbfcf-mtvkp", "timestamp":"2026-03-13 00:32:51.795721882 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002934a0)} Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.805 [INFO][4394] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.805 [INFO][4394] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.805 [INFO][4394] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.808 [INFO][4394] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.813 [INFO][4394] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.818 [INFO][4394] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.879253 containerd[1541]: 2026-03-13 00:32:51.820 [INFO][4394] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.824 [INFO][4394] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.824 [INFO][4394] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.826 [INFO][4394] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38 Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.833 [INFO][4394] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 
handle="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.841 [INFO][4394] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.66/26] block=192.168.87.64/26 handle="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.841 [INFO][4394] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.66/26] handle="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.841 [INFO][4394] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:32:51.880788 containerd[1541]: 2026-03-13 00:32:51.841 [INFO][4394] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.66/26] IPv6=[] ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" HandleID="k8s-pod-network.0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.844 [INFO][4383] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0", 
GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b4b44de6-b1a2-4e6b-a665-3ac368121f3e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"coredns-674b8bbfcf-mtvkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib96ed3e8cf2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.844 [INFO][4383] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.66/32] ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.844 [INFO][4383] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib96ed3e8cf2 ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.851 [INFO][4383] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.852 [INFO][4383] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"b4b44de6-b1a2-4e6b-a665-3ac368121f3e", ResourceVersion:"876", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38", Pod:"coredns-674b8bbfcf-mtvkp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib96ed3e8cf2", MAC:"26:d9:94:7c:20:96", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:51.882032 containerd[1541]: 2026-03-13 00:32:51.875 [INFO][4383] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" Namespace="kube-system" Pod="coredns-674b8bbfcf-mtvkp" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--mtvkp-eth0" Mar 13 00:32:51.930211 containerd[1541]: time="2026-03-13T00:32:51.929335255Z" level=info msg="connecting to shim 0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38" address="unix:///run/containerd/s/f499dfcf78cc2bd950a87183747e0fc0cef03765a1ff1dba872908b8c393ed7d" namespace=k8s.io protocol=ttrpc version=3 Mar 13 
00:32:51.988640 systemd[1]: Started cri-containerd-0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38.scope - libcontainer container 0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38. Mar 13 00:32:52.071534 containerd[1541]: time="2026-03-13T00:32:52.071486931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-mtvkp,Uid:b4b44de6-b1a2-4e6b-a665-3ac368121f3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38\"" Mar 13 00:32:52.078883 containerd[1541]: time="2026-03-13T00:32:52.078822012Z" level=info msg="CreateContainer within sandbox \"0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:32:52.090204 containerd[1541]: time="2026-03-13T00:32:52.089501573Z" level=info msg="Container df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:52.100099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2359766628.mount: Deactivated successfully. 
Mar 13 00:32:52.103080 containerd[1541]: time="2026-03-13T00:32:52.103025969Z" level=info msg="CreateContainer within sandbox \"0d9aebe2bd887320a709f8883371ac24712598707e8fcae03528e1106ff45b38\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576\"" Mar 13 00:32:52.104003 containerd[1541]: time="2026-03-13T00:32:52.103965116Z" level=info msg="StartContainer for \"df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576\"" Mar 13 00:32:52.105726 containerd[1541]: time="2026-03-13T00:32:52.105673868Z" level=info msg="connecting to shim df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576" address="unix:///run/containerd/s/f499dfcf78cc2bd950a87183747e0fc0cef03765a1ff1dba872908b8c393ed7d" protocol=ttrpc version=3 Mar 13 00:32:52.132692 systemd[1]: Started cri-containerd-df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576.scope - libcontainer container df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576. 
Mar 13 00:32:52.172923 containerd[1541]: time="2026-03-13T00:32:52.172855040Z" level=info msg="StartContainer for \"df8665814ee62324ebc6deaebfc571c229946ea4f9bdfd261d5381f3f14a7576\" returns successfully" Mar 13 00:32:52.969951 kubelet[2877]: I0313 00:32:52.969728 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-mtvkp" podStartSLOduration=45.969705901 podStartE2EDuration="45.969705901s" podCreationTimestamp="2026-03-13 00:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:52.969029067 +0000 UTC m=+50.511817676" watchObservedRunningTime="2026-03-13 00:32:52.969705901 +0000 UTC m=+50.512494493" Mar 13 00:32:53.710856 containerd[1541]: time="2026-03-13T00:32:53.710383072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvgbn,Uid:e2bf3ce5-f601-452a-966a-8cd01fc34905,Namespace:kube-system,Attempt:0,}" Mar 13 00:32:53.710856 containerd[1541]: time="2026-03-13T00:32:53.710432589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ttt8q,Uid:43d46660-bac5-4548-8a40-07368360529c,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:53.711523 containerd[1541]: time="2026-03-13T00:32:53.711258749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-767k7,Uid:83ee5173-94ac-4835-8915-de75a4be9229,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:53.888573 systemd-networkd[1449]: calib96ed3e8cf2: Gained IPv6LL Mar 13 00:32:53.979672 systemd-networkd[1449]: calic098caaa26f: Link UP Mar 13 00:32:53.981222 systemd-networkd[1449]: calic098caaa26f: Gained carrier Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.832 [INFO][4527] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0 calico-apiserver-659d4f9cbf- calico-system 83ee5173-94ac-4835-8915-de75a4be9229 880 0 2026-03-13 00:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:659d4f9cbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf calico-apiserver-659d4f9cbf-767k7 eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic098caaa26f [] [] }} ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.833 [INFO][4527] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.896 [INFO][4550] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" HandleID="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.921 [INFO][4550] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" 
HandleID="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002fde90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"calico-apiserver-659d4f9cbf-767k7", "timestamp":"2026-03-13 00:32:53.896573075 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.922 [INFO][4550] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.922 [INFO][4550] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.922 [INFO][4550] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.926 [INFO][4550] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.933 [INFO][4550] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.943 [INFO][4550] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.945 [INFO][4550] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.949 [INFO][4550] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.949 [INFO][4550] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.951 [INFO][4550] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.961 [INFO][4550] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 
handle="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4550] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.67/26] block=192.168.87.64/26 handle="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4550] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.67/26] handle="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4550] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:32:54.004524 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4550] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.67/26] IPv6=[] ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" HandleID="k8s-pod-network.06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.975 [INFO][4527] cni-plugin/k8s.go 418: Populated endpoint ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0", GenerateName:"calico-apiserver-659d4f9cbf-", Namespace:"calico-system", SelfLink:"", UID:"83ee5173-94ac-4835-8915-de75a4be9229", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"659d4f9cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"calico-apiserver-659d4f9cbf-767k7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic098caaa26f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.975 [INFO][4527] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.67/32] ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.975 [INFO][4527] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calic098caaa26f ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.980 [INFO][4527] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.981 [INFO][4527] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0", GenerateName:"calico-apiserver-659d4f9cbf-", Namespace:"calico-system", SelfLink:"", UID:"83ee5173-94ac-4835-8915-de75a4be9229", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"659d4f9cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba", Pod:"calico-apiserver-659d4f9cbf-767k7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic098caaa26f", MAC:"6e:44:4b:a4:0f:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.007319 containerd[1541]: 2026-03-13 00:32:53.996 [INFO][4527] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-767k7" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--767k7-eth0" Mar 13 00:32:54.050097 containerd[1541]: time="2026-03-13T00:32:54.049945875Z" level=info msg="connecting to shim 06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba" address="unix:///run/containerd/s/1ac8583d6208bca2ae8a1183070f0c9d1aeb65f333dc8ebfdd467a9c037f4e81" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:54.123560 systemd[1]: Started cri-containerd-06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba.scope - libcontainer container 06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba. 
Mar 13 00:32:54.138979 systemd-networkd[1449]: cali1b62f7ab868: Link UP Mar 13 00:32:54.141780 systemd-networkd[1449]: cali1b62f7ab868: Gained carrier Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.837 [INFO][4507] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0 goldmane-5b85766d88- calico-system 43d46660-bac5-4548-8a40-07368360529c 881 0 2026-03-13 00:32:19 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:5b85766d88 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf goldmane-5b85766d88-ttt8q eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali1b62f7ab868 [] [] }} ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.839 [INFO][4507] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.914 [INFO][4552] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" HandleID="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" 
Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.932 [INFO][4552] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" HandleID="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e7e80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"goldmane-5b85766d88-ttt8q", "timestamp":"2026-03-13 00:32:53.914771277 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000168f20)} Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.941 [INFO][4552] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4552] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:53.972 [INFO][4552] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.027 [INFO][4552] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.046 [INFO][4552] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.059 [INFO][4552] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.064 [INFO][4552] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.075 [INFO][4552] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.078 [INFO][4552] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.083 [INFO][4552] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57 Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.097 [INFO][4552] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 
handle="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.111 [INFO][4552] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.68/26] block=192.168.87.64/26 handle="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.112 [INFO][4552] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.68/26] handle="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.112 [INFO][4552] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:32:54.183382 containerd[1541]: 2026-03-13 00:32:54.117 [INFO][4552] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.68/26] IPv6=[] ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" HandleID="k8s-pod-network.9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.130 [INFO][4507] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0", 
GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"43d46660-bac5-4548-8a40-07368360529c", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"goldmane-5b85766d88-ttt8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1b62f7ab868", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.130 [INFO][4507] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.68/32] ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.130 [INFO][4507] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b62f7ab868 ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.138 [INFO][4507] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.139 [INFO][4507] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0", GenerateName:"goldmane-5b85766d88-", Namespace:"calico-system", SelfLink:"", UID:"43d46660-bac5-4548-8a40-07368360529c", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"5b85766d88", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", 
ContainerID:"9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57", Pod:"goldmane-5b85766d88-ttt8q", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.87.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali1b62f7ab868", MAC:"66:b6:39:c6:cd:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.187921 containerd[1541]: 2026-03-13 00:32:54.167 [INFO][4507] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" Namespace="calico-system" Pod="goldmane-5b85766d88-ttt8q" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-goldmane--5b85766d88--ttt8q-eth0" Mar 13 00:32:54.274128 containerd[1541]: time="2026-03-13T00:32:54.273829353Z" level=info msg="connecting to shim 9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57" address="unix:///run/containerd/s/3e3d240e655317a55bc23466095f587702335850eaf9f449b98f7f36c7d8b5c5" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:54.286576 systemd-networkd[1449]: cali9b114f83460: Link UP Mar 13 00:32:54.291440 systemd-networkd[1449]: cali9b114f83460: Gained carrier Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:53.835 [INFO][4506] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0 coredns-674b8bbfcf- kube-system e2bf3ce5-f601-452a-966a-8cd01fc34905 874 0 2026-03-13 00:32:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf 
coredns-674b8bbfcf-tvgbn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9b114f83460 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:53.835 [INFO][4506] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:53.945 [INFO][4549] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" HandleID="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:53.967 [INFO][4549] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" HandleID="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003af430), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"coredns-674b8bbfcf-tvgbn", "timestamp":"2026-03-13 00:32:53.945797277 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc000195080)} Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:53.967 [INFO][4549] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.114 [INFO][4549] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.120 [INFO][4549] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.128 [INFO][4549] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.149 [INFO][4549] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.170 [INFO][4549] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.175 [INFO][4549] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.182 [INFO][4549] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.186 [INFO][4549] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 
handle="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.195 [INFO][4549] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.205 [INFO][4549] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 handle="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.223 [INFO][4549] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.69/26] block=192.168.87.64/26 handle="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.223 [INFO][4549] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.69/26] handle="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.223 [INFO][4549] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:32:54.330272 containerd[1541]: 2026-03-13 00:32:54.224 [INFO][4549] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.69/26] IPv6=[] ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" HandleID="k8s-pod-network.73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.236 [INFO][4506] cni-plugin/k8s.go 418: Populated endpoint ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2bf3ce5-f601-452a-966a-8cd01fc34905", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"coredns-674b8bbfcf-tvgbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.69/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b114f83460", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.242 [INFO][4506] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.69/32] ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.242 [INFO][4506] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b114f83460 ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.304 [INFO][4506] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.305 [INFO][4506] cni-plugin/k8s.go 446: 
Added Mac, interface name, and active container ID to endpoint ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"e2bf3ce5-f601-452a-966a-8cd01fc34905", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e", Pod:"coredns-674b8bbfcf-tvgbn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.87.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9b114f83460", MAC:"82:05:8e:60:af:93", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, 
Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:54.332965 containerd[1541]: 2026-03-13 00:32:54.325 [INFO][4506] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" Namespace="kube-system" Pod="coredns-674b8bbfcf-tvgbn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-coredns--674b8bbfcf--tvgbn-eth0" Mar 13 00:32:54.371473 systemd[1]: Started cri-containerd-9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57.scope - libcontainer container 9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57. Mar 13 00:32:54.424520 containerd[1541]: time="2026-03-13T00:32:54.424437824Z" level=info msg="connecting to shim 73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e" address="unix:///run/containerd/s/3e55382bf332dd3c7ec112381ae96da6380649f17ec72f767125371ba5b9e4b7" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:54.438615 containerd[1541]: time="2026-03-13T00:32:54.438329765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-767k7,Uid:83ee5173-94ac-4835-8915-de75a4be9229,Namespace:calico-system,Attempt:0,} returns sandbox id \"06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba\"" Mar 13 00:32:54.445002 containerd[1541]: time="2026-03-13T00:32:54.444781550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:32:54.490419 systemd[1]: Started cri-containerd-73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e.scope - libcontainer container 73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e. 
Mar 13 00:32:54.615218 containerd[1541]: time="2026-03-13T00:32:54.615088480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-5b85766d88-ttt8q,Uid:43d46660-bac5-4548-8a40-07368360529c,Namespace:calico-system,Attempt:0,} returns sandbox id \"9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57\"" Mar 13 00:32:54.617603 containerd[1541]: time="2026-03-13T00:32:54.617531039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tvgbn,Uid:e2bf3ce5-f601-452a-966a-8cd01fc34905,Namespace:kube-system,Attempt:0,} returns sandbox id \"73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e\"" Mar 13 00:32:54.625483 containerd[1541]: time="2026-03-13T00:32:54.625439609Z" level=info msg="CreateContainer within sandbox \"73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 13 00:32:54.635015 containerd[1541]: time="2026-03-13T00:32:54.634918717Z" level=info msg="Container c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:54.641637 containerd[1541]: time="2026-03-13T00:32:54.641578384Z" level=info msg="CreateContainer within sandbox \"73891b0823e4df45b129eca69059034c5bd6edc34283e51589e767854bf7685e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8\"" Mar 13 00:32:54.642391 containerd[1541]: time="2026-03-13T00:32:54.642097064Z" level=info msg="StartContainer for \"c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8\"" Mar 13 00:32:54.644042 containerd[1541]: time="2026-03-13T00:32:54.643999543Z" level=info msg="connecting to shim c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8" address="unix:///run/containerd/s/3e55382bf332dd3c7ec112381ae96da6380649f17ec72f767125371ba5b9e4b7" protocol=ttrpc version=3 Mar 13 00:32:54.670502 systemd[1]: Started 
cri-containerd-c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8.scope - libcontainer container c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8. Mar 13 00:32:54.741488 containerd[1541]: time="2026-03-13T00:32:54.741238144Z" level=info msg="StartContainer for \"c27f313020aaa18a0dc5e1e2a2e2f0aa3453af06e542efc13264c952311624e8\" returns successfully" Mar 13 00:32:54.998314 kubelet[2877]: I0313 00:32:54.998234 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tvgbn" podStartSLOduration=47.998210297 podStartE2EDuration="47.998210297s" podCreationTimestamp="2026-03-13 00:32:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 00:32:54.981059624 +0000 UTC m=+52.523848215" watchObservedRunningTime="2026-03-13 00:32:54.998210297 +0000 UTC m=+52.540998881" Mar 13 00:32:55.294683 systemd-networkd[1449]: calic098caaa26f: Gained IPv6LL Mar 13 00:32:55.422735 systemd-networkd[1449]: cali1b62f7ab868: Gained IPv6LL Mar 13 00:32:55.487956 systemd-networkd[1449]: cali9b114f83460: Gained IPv6LL Mar 13 00:32:55.710589 containerd[1541]: time="2026-03-13T00:32:55.710141612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b45757c7-kfgc2,Uid:d29d464e-9db9-4c76-b402-8acd480907f8,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:55.710785 containerd[1541]: time="2026-03-13T00:32:55.710386972Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-ltc7r,Uid:b9c3c193-f779-4025-83cf-8cb7199d19c1,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:55.711193 containerd[1541]: time="2026-03-13T00:32:55.710519075Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lrsn,Uid:2e6f70af-e699-4466-b0e0-161b9b41f3be,Namespace:calico-system,Attempt:0,}" Mar 13 00:32:56.112162 systemd-networkd[1449]: calic7c609bd8d8: Link UP Mar 
13 00:32:56.115376 systemd-networkd[1449]: calic7c609bd8d8: Gained carrier Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:55.867 [INFO][4820] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0 calico-apiserver-659d4f9cbf- calico-system b9c3c193-f779-4025-83cf-8cb7199d19c1 875 0 2026-03-13 00:32:19 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:659d4f9cbf projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf calico-apiserver-659d4f9cbf-ltc7r eth0 calico-apiserver [] [] [kns.calico-system ksa.calico-system.calico-apiserver] calic7c609bd8d8 [] [] }} ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:55.868 [INFO][4820] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.008 [INFO][4859] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" HandleID="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" 
Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.029 [INFO][4859] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" HandleID="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000387860), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"calico-apiserver-659d4f9cbf-ltc7r", "timestamp":"2026-03-13 00:32:56.008947569 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0002ba420)} Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.029 [INFO][4859] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.029 [INFO][4859] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.029 [INFO][4859] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.034 [INFO][4859] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.043 [INFO][4859] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.055 [INFO][4859] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.062 [INFO][4859] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.068 [INFO][4859] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.068 [INFO][4859] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.071 [INFO][4859] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.080 [INFO][4859] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 
handle="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.097 [INFO][4859] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.70/26] block=192.168.87.64/26 handle="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.098 [INFO][4859] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.70/26] handle="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.098 [INFO][4859] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:32:56.154129 containerd[1541]: 2026-03-13 00:32:56.098 [INFO][4859] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.70/26] IPv6=[] ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" HandleID="k8s-pod-network.dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.103 [INFO][4820] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0", GenerateName:"calico-apiserver-659d4f9cbf-", Namespace:"calico-system", SelfLink:"", UID:"b9c3c193-f779-4025-83cf-8cb7199d19c1", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"659d4f9cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"calico-apiserver-659d4f9cbf-ltc7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic7c609bd8d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.104 [INFO][4820] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.70/32] ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.104 [INFO][4820] cni-plugin/dataplane_linux.go 69: Setting 
the host side veth name to calic7c609bd8d8 ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.118 [INFO][4820] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.121 [INFO][4820] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0", GenerateName:"calico-apiserver-659d4f9cbf-", Namespace:"calico-system", SelfLink:"", UID:"b9c3c193-f779-4025-83cf-8cb7199d19c1", ResourceVersion:"875", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"659d4f9cbf", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a", Pod:"calico-apiserver-659d4f9cbf-ltc7r", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.87.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-apiserver"}, InterfaceName:"calic7c609bd8d8", MAC:"fe:81:97:a6:be:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:56.158141 containerd[1541]: 2026-03-13 00:32:56.150 [INFO][4820] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" Namespace="calico-system" Pod="calico-apiserver-659d4f9cbf-ltc7r" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--apiserver--659d4f9cbf--ltc7r-eth0" Mar 13 00:32:56.256481 containerd[1541]: time="2026-03-13T00:32:56.254755873Z" level=info msg="connecting to shim dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a" address="unix:///run/containerd/s/7abfd6b4874574e0fe0ffd69e0b847f6165d4998f4fe76f46bcc2bb6532c5baf" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:56.293620 systemd-networkd[1449]: calia475e394ebb: Link UP Mar 13 00:32:56.296639 systemd-networkd[1449]: calia475e394ebb: Gained carrier Mar 13 00:32:56.330723 systemd[1]: Started cri-containerd-dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a.scope - libcontainer container dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a. 
Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:55.975 [INFO][4824] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0 csi-node-driver- calico-system 2e6f70af-e699-4466-b0e0-161b9b41f3be 724 0 2026-03-13 00:32:20 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6d9d697c7c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf csi-node-driver-8lrsn eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia475e394ebb [] [] }} ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:55.975 [INFO][4824] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.098 [INFO][4877] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" HandleID="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.122 [INFO][4877] 
ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" HandleID="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00032bea0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"csi-node-driver-8lrsn", "timestamp":"2026-03-13 00:32:56.098748925 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc0001882c0)} Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.122 [INFO][4877] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.123 [INFO][4877] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. 
Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.123 [INFO][4877] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.141 [INFO][4877] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.160 [INFO][4877] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.174 [INFO][4877] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.178 [INFO][4877] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.183 [INFO][4877] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.183 [INFO][4877] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 handle="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.190 [INFO][4877] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433 Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.217 [INFO][4877] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 
handle="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.241 [INFO][4877] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.71/26] block=192.168.87.64/26 handle="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.243 [INFO][4877] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.71/26] handle="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.244 [INFO][4877] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. Mar 13 00:32:56.360930 containerd[1541]: 2026-03-13 00:32:56.244 [INFO][4877] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.71/26] IPv6=[] ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" HandleID="k8s-pod-network.981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.257 [INFO][4824] cni-plugin/k8s.go 418: Populated endpoint ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0", GenerateName:"csi-node-driver-", 
Namespace:"calico-system", SelfLink:"", UID:"2e6f70af-e699-4466-b0e0-161b9b41f3be", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"csi-node-driver-8lrsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia475e394ebb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.257 [INFO][4824] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.71/32] ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.258 [INFO][4824] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia475e394ebb ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" 
WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.301 [INFO][4824] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.307 [INFO][4824] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2e6f70af-e699-4466-b0e0-161b9b41f3be", ResourceVersion:"724", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6d9d697c7c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433", Pod:"csi-node-driver-8lrsn", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.87.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia475e394ebb", MAC:"56:59:4b:e6:9c:6f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:56.365131 containerd[1541]: 2026-03-13 00:32:56.341 [INFO][4824] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" Namespace="calico-system" Pod="csi-node-driver-8lrsn" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-csi--node--driver--8lrsn-eth0" Mar 13 00:32:56.391320 systemd-networkd[1449]: cali82b8d520c87: Link UP Mar 13 00:32:56.397737 systemd-networkd[1449]: cali82b8d520c87: Gained carrier Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:55.975 [INFO][4823] cni-plugin/plugin.go 342: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0 calico-kube-controllers-76b45757c7- calico-system d29d464e-9db9-4c76-b402-8acd480907f8 877 0 2026-03-13 00:32:20 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76b45757c7 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf calico-kube-controllers-76b45757c7-kfgc2 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali82b8d520c87 [] [] }} ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:55.976 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.115 [INFO][4871] ipam/ipam_plugin.go 235: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" HandleID="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.152 [INFO][4871] ipam/ipam_plugin.go 301: Auto assigning IP ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" HandleID="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004fb80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", "pod":"calico-kube-controllers-76b45757c7-kfgc2", "timestamp":"2026-03-13 00:32:56.114610223 +0000 UTC"}, Hostname:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload", Namespace:(*v1.Namespace)(0xc00019a2c0)} Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.152 [INFO][4871] ipam/ipam_plugin.go 438: About to acquire host-wide IPAM lock. Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.245 [INFO][4871] ipam/ipam_plugin.go 453: Acquired host-wide IPAM lock. Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.245 [INFO][4871] ipam/ipam.go 112: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf' Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.250 [INFO][4871] ipam/ipam.go 707: Looking up existing affinities for host handle="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.260 [INFO][4871] ipam/ipam.go 409: Looking up existing affinities for host host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.285 [INFO][4871] ipam/ipam.go 526: Trying affinity for 192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.297 [INFO][4871] ipam/ipam.go 160: Attempting to load block cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.304 [INFO][4871] ipam/ipam.go 237: Affinity is confirmed and block has been loaded cidr=192.168.87.64/26 host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.309 [INFO][4871] ipam/ipam.go 1245: Attempting to assign 1 addresses from block block=192.168.87.64/26 
handle="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.319 [INFO][4871] ipam/ipam.go 1806: Creating new handle: k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9 Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.346 [INFO][4871] ipam/ipam.go 1272: Writing block in order to claim IPs block=192.168.87.64/26 handle="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.364 [INFO][4871] ipam/ipam.go 1288: Successfully claimed IPs: [192.168.87.72/26] block=192.168.87.64/26 handle="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.364 [INFO][4871] ipam/ipam.go 895: Auto-assigned 1 out of 1 IPv4s: [192.168.87.72/26] handle="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" host="ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf" Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.364 [INFO][4871] ipam/ipam_plugin.go 459: Released host-wide IPAM lock. 
Mar 13 00:32:56.439988 containerd[1541]: 2026-03-13 00:32:56.364 [INFO][4871] ipam/ipam_plugin.go 325: Calico CNI IPAM assigned addresses IPv4=[192.168.87.72/26] IPv6=[] ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" HandleID="k8s-pod-network.c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Workload="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.381 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0", GenerateName:"calico-kube-controllers-76b45757c7-", Namespace:"calico-system", SelfLink:"", UID:"d29d464e-9db9-4c76-b402-8acd480907f8", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b45757c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"", Pod:"calico-kube-controllers-76b45757c7-kfgc2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82b8d520c87", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.381 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.87.72/32] ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.381 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali82b8d520c87 ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.405 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.407 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0", GenerateName:"calico-kube-controllers-76b45757c7-", Namespace:"calico-system", SelfLink:"", UID:"d29d464e-9db9-4c76-b402-8acd480907f8", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2026, time.March, 13, 0, 32, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76b45757c7", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4459-2-4-nightly-20260312-2100-cb93f6b72eee96d5aecf", ContainerID:"c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9", Pod:"calico-kube-controllers-76b45757c7-kfgc2", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.87.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali82b8d520c87", MAC:"fa:d8:e9:43:51:25", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Mar 13 
00:32:56.441164 containerd[1541]: 2026-03-13 00:32:56.434 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" Namespace="calico-system" Pod="calico-kube-controllers-76b45757c7-kfgc2" WorkloadEndpoint="ci--4459--2--4--nightly--20260312--2100--cb93f6b72eee96d5aecf-k8s-calico--kube--controllers--76b45757c7--kfgc2-eth0" Mar 13 00:32:56.460945 containerd[1541]: time="2026-03-13T00:32:56.460782544Z" level=info msg="connecting to shim 981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433" address="unix:///run/containerd/s/0a73ded21b136c34016304e84628c060e27da020e943a23fff3acb96f22cb8f8" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:56.523409 containerd[1541]: time="2026-03-13T00:32:56.523341368Z" level=info msg="connecting to shim c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9" address="unix:///run/containerd/s/cb8ec4e97702bec1cc4215f2f3572e15c47a3afd0a25b726ca16351061c9636f" namespace=k8s.io protocol=ttrpc version=3 Mar 13 00:32:56.563399 systemd[1]: Started cri-containerd-981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433.scope - libcontainer container 981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433. Mar 13 00:32:56.639361 systemd[1]: Started cri-containerd-c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9.scope - libcontainer container c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9. 
Mar 13 00:32:56.667254 containerd[1541]: time="2026-03-13T00:32:56.665915997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-659d4f9cbf-ltc7r,Uid:b9c3c193-f779-4025-83cf-8cb7199d19c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a\"" Mar 13 00:32:56.748886 containerd[1541]: time="2026-03-13T00:32:56.748385217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8lrsn,Uid:2e6f70af-e699-4466-b0e0-161b9b41f3be,Namespace:calico-system,Attempt:0,} returns sandbox id \"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433\"" Mar 13 00:32:56.839533 containerd[1541]: time="2026-03-13T00:32:56.839469185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76b45757c7-kfgc2,Uid:d29d464e-9db9-4c76-b402-8acd480907f8,Namespace:calico-system,Attempt:0,} returns sandbox id \"c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9\"" Mar 13 00:32:57.727350 systemd-networkd[1449]: calia475e394ebb: Gained IPv6LL Mar 13 00:32:57.729315 systemd-networkd[1449]: calic7c609bd8d8: Gained IPv6LL Mar 13 00:32:57.812335 containerd[1541]: time="2026-03-13T00:32:57.812274657Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:57.813890 containerd[1541]: time="2026-03-13T00:32:57.813851653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=48415780" Mar 13 00:32:57.815022 containerd[1541]: time="2026-03-13T00:32:57.814670808Z" level=info msg="ImageCreate event name:\"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:57.817942 containerd[1541]: time="2026-03-13T00:32:57.817877402Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:32:57.819191 containerd[1541]: time="2026-03-13T00:32:57.818883914Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 3.373329712s" Mar 13 00:32:57.819191 containerd[1541]: time="2026-03-13T00:32:57.818925090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:32:57.820615 containerd[1541]: time="2026-03-13T00:32:57.820584064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\"" Mar 13 00:32:57.824606 containerd[1541]: time="2026-03-13T00:32:57.824495725Z" level=info msg="CreateContainer within sandbox \"06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:32:57.834807 containerd[1541]: time="2026-03-13T00:32:57.833859765Z" level=info msg="Container d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:32:57.843701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount678010018.mount: Deactivated successfully. 
Mar 13 00:32:57.852684 containerd[1541]: time="2026-03-13T00:32:57.852626251Z" level=info msg="CreateContainer within sandbox \"06cd40a8a5e9493130538935a7a51ad23d698675156ff7d808fee886a9ae2cba\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1\"" Mar 13 00:32:57.854210 containerd[1541]: time="2026-03-13T00:32:57.853674030Z" level=info msg="StartContainer for \"d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1\"" Mar 13 00:32:57.856575 containerd[1541]: time="2026-03-13T00:32:57.856528109Z" level=info msg="connecting to shim d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1" address="unix:///run/containerd/s/1ac8583d6208bca2ae8a1183070f0c9d1aeb65f333dc8ebfdd467a9c037f4e81" protocol=ttrpc version=3 Mar 13 00:32:57.899381 systemd[1]: Started cri-containerd-d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1.scope - libcontainer container d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1. 
Mar 13 00:32:57.968278 containerd[1541]: time="2026-03-13T00:32:57.968202942Z" level=info msg="StartContainer for \"d087899ff16807906ed3a90642cd733e798be6dd03a139f35d83ca542c3c94f1\" returns successfully" Mar 13 00:32:57.982475 systemd-networkd[1449]: cali82b8d520c87: Gained IPv6LL Mar 13 00:32:58.017413 kubelet[2877]: I0313 00:32:58.016834 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-659d4f9cbf-767k7" podStartSLOduration=35.639908355 podStartE2EDuration="39.016807402s" podCreationTimestamp="2026-03-13 00:32:19 +0000 UTC" firstStartedPulling="2026-03-13 00:32:54.443423356 +0000 UTC m=+51.986211921" lastFinishedPulling="2026-03-13 00:32:57.820322401 +0000 UTC m=+55.363110968" observedRunningTime="2026-03-13 00:32:58.01603495 +0000 UTC m=+55.558823544" watchObservedRunningTime="2026-03-13 00:32:58.016807402 +0000 UTC m=+55.559595993" Mar 13 00:32:59.013054 kubelet[2877]: I0313 00:32:59.012544 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 00:32:59.820224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4205084050.mount: Deactivated successfully. 
Mar 13 00:33:00.429282 containerd[1541]: time="2026-03-13T00:33:00.429217913Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:33:00.430516 containerd[1541]: time="2026-03-13T00:33:00.430452123Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.31.4: active requests=0, bytes read=55623386" Mar 13 00:33:00.431734 containerd[1541]: time="2026-03-13T00:33:00.431663166Z" level=info msg="ImageCreate event name:\"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:33:00.434722 containerd[1541]: time="2026-03-13T00:33:00.434655358Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:33:00.435868 containerd[1541]: time="2026-03-13T00:33:00.435686472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" with image id \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:44395ca5ebfe88f21ed51acfbec5fc0f31d2762966e2007a0a2eb9b30e35fc4d\", size \"55623232\" in 2.615059982s" Mar 13 00:33:00.435868 containerd[1541]: time="2026-03-13T00:33:00.435728827Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.31.4\" returns image reference \"sha256:714983e5e920bbe810fab04d9f06bd16ef4e552b0d2deffd7ab2b2c4a001acbb\"" Mar 13 00:33:00.437031 containerd[1541]: time="2026-03-13T00:33:00.437001321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\"" Mar 13 00:33:00.441210 containerd[1541]: time="2026-03-13T00:33:00.441118267Z" level=info msg="CreateContainer within sandbox 
\"9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}" Mar 13 00:33:00.451380 containerd[1541]: time="2026-03-13T00:33:00.451344402Z" level=info msg="Container 0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:33:00.464079 containerd[1541]: time="2026-03-13T00:33:00.464020770Z" level=info msg="CreateContainer within sandbox \"9665d85511fbdca7958f5911f310b6f50089df31a675115d4b1dbb10cbb3ac57\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7\"" Mar 13 00:33:00.464918 containerd[1541]: time="2026-03-13T00:33:00.464884054Z" level=info msg="StartContainer for \"0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7\"" Mar 13 00:33:00.466775 containerd[1541]: time="2026-03-13T00:33:00.466717853Z" level=info msg="connecting to shim 0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7" address="unix:///run/containerd/s/3e3d240e655317a55bc23466095f587702335850eaf9f449b98f7f36c7d8b5c5" protocol=ttrpc version=3 Mar 13 00:33:00.503345 systemd[1]: Started cri-containerd-0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7.scope - libcontainer container 0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7. 
Mar 13 00:33:00.582591 containerd[1541]: time="2026-03-13T00:33:00.582479590Z" level=info msg="StartContainer for \"0aa78d05e5d8ab56f408a4f700fb83228dd4ce3928c9948127ae198837c901a7\" returns successfully" Mar 13 00:33:00.714641 containerd[1541]: time="2026-03-13T00:33:00.714492058Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 13 00:33:00.716032 containerd[1541]: time="2026-03-13T00:33:00.715988514Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.31.4: active requests=0, bytes read=77" Mar 13 00:33:00.718666 containerd[1541]: time="2026-03-13T00:33:00.718626912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" with image id \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:d212af1da3dd52a633bc9e36653a7d901d95a570f8d51d1968a837dcf6879730\", size \"49971841\" in 281.337997ms" Mar 13 00:33:00.718666 containerd[1541]: time="2026-03-13T00:33:00.718665912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.31.4\" returns image reference \"sha256:f7ff80340b9b4973ceda29859065985831588b2898f2b4009f742b5789010898\"" Mar 13 00:33:00.719727 containerd[1541]: time="2026-03-13T00:33:00.719659357Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\"" Mar 13 00:33:00.724563 containerd[1541]: time="2026-03-13T00:33:00.724491317Z" level=info msg="CreateContainer within sandbox \"dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 13 00:33:00.735210 containerd[1541]: time="2026-03-13T00:33:00.734394994Z" level=info msg="Container 6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4: CDI devices from CRI Config.CDIDevices: []" Mar 13 00:33:00.748815 ntpd[1633]: Listen normally 
on 9 calib96ed3e8cf2 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 13 00:33:00.748913 ntpd[1633]: Listen normally on 10 calic098caaa26f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 9 calib96ed3e8cf2 [fe80::ecee:eeff:feee:eeee%8]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 10 calic098caaa26f [fe80::ecee:eeff:feee:eeee%9]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 11 cali1b62f7ab868 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 12 cali9b114f83460 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 13 calic7c609bd8d8 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 14 calia475e394ebb [fe80::ecee:eeff:feee:eeee%13]:123 Mar 13 00:33:00.749360 ntpd[1633]: 13 Mar 00:33:00 ntpd[1633]: Listen normally on 15 cali82b8d520c87 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 13 00:33:00.748961 ntpd[1633]: Listen normally on 11 cali1b62f7ab868 [fe80::ecee:eeff:feee:eeee%10]:123 Mar 13 00:33:00.749003 ntpd[1633]: Listen normally on 12 cali9b114f83460 [fe80::ecee:eeff:feee:eeee%11]:123 Mar 13 00:33:00.749045 ntpd[1633]: Listen normally on 13 calic7c609bd8d8 [fe80::ecee:eeff:feee:eeee%12]:123 Mar 13 00:33:00.749090 ntpd[1633]: Listen normally on 14 calia475e394ebb [fe80::ecee:eeff:feee:eeee%13]:123 Mar 13 00:33:00.749129 ntpd[1633]: Listen normally on 15 cali82b8d520c87 [fe80::ecee:eeff:feee:eeee%14]:123 Mar 13 00:33:00.752046 containerd[1541]: time="2026-03-13T00:33:00.751984132Z" level=info msg="CreateContainer within sandbox \"dca81160f7cf9219a59a79caebfa34958855d2f4af753bf08970fa761986ac1a\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4\"" Mar 
13 00:33:00.752890 containerd[1541]: time="2026-03-13T00:33:00.752774390Z" level=info msg="StartContainer for \"6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4\"" Mar 13 00:33:00.755547 containerd[1541]: time="2026-03-13T00:33:00.755440165Z" level=info msg="connecting to shim 6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4" address="unix:///run/containerd/s/7abfd6b4874574e0fe0ffd69e0b847f6165d4998f4fe76f46bcc2bb6532c5baf" protocol=ttrpc version=3 Mar 13 00:33:00.790357 systemd[1]: Started cri-containerd-6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4.scope - libcontainer container 6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4. Mar 13 00:33:00.864687 containerd[1541]: time="2026-03-13T00:33:00.864643472Z" level=info msg="StartContainer for \"6d7a4f068325f0a5ece1996d87c418a43d41b36be9d1ce250dc6bc0c6483c2b4\" returns successfully" Mar 13 00:33:01.057692 kubelet[2877]: I0313 00:33:01.057522 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-apiserver-659d4f9cbf-ltc7r" podStartSLOduration=38.017561087 podStartE2EDuration="42.057421798s" podCreationTimestamp="2026-03-13 00:32:19 +0000 UTC" firstStartedPulling="2026-03-13 00:32:56.679616883 +0000 UTC m=+54.222405449" lastFinishedPulling="2026-03-13 00:33:00.719477546 +0000 UTC m=+58.262266160" observedRunningTime="2026-03-13 00:33:01.054656087 +0000 UTC m=+58.597444679" watchObservedRunningTime="2026-03-13 00:33:01.057421798 +0000 UTC m=+58.600210386" Mar 13 00:33:01.210844 kubelet[2877]: I0313 00:33:01.210719 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-5b85766d88-ttt8q" podStartSLOduration=36.392362141 podStartE2EDuration="42.210697916s" podCreationTimestamp="2026-03-13 00:32:19 +0000 UTC" firstStartedPulling="2026-03-13 00:32:54.618506538 +0000 UTC m=+52.161295119" lastFinishedPulling="2026-03-13 00:33:00.436842302 +0000 UTC m=+57.979630894" 
observedRunningTime="2026-03-13 00:33:01.086969086 +0000 UTC m=+58.629757678" watchObservedRunningTime="2026-03-13 00:33:01.210697916 +0000 UTC m=+58.753486508"
Mar 13 00:33:02.036304 kubelet[2877]: I0313 00:33:02.036263 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:33:02.102214 containerd[1541]: time="2026-03-13T00:33:02.101264854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:02.104045 containerd[1541]: time="2026-03-13T00:33:02.104007390Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.31.4: active requests=0, bytes read=8792502"
Mar 13 00:33:02.105001 containerd[1541]: time="2026-03-13T00:33:02.104971557Z" level=info msg="ImageCreate event name:\"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:02.109137 containerd[1541]: time="2026-03-13T00:33:02.109106470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:02.111398 containerd[1541]: time="2026-03-13T00:33:02.111365432Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.31.4\" with image id \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\", repo tag \"ghcr.io/flatcar/calico/csi:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ab57dd6f8423ef7b3ff382bf4ca5ace6063bdca77d441d852c75ec58847dd280\", size \"10348547\" in 1.391500684s"
Mar 13 00:33:02.111571 containerd[1541]: time="2026-03-13T00:33:02.111549594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.31.4\" returns image reference \"sha256:4c8cd7d0b10a4df64a5bd90e9845e9d1edbe0e37c2ebfc171bb28698e07abf72\""
Mar 13 00:33:02.114367 containerd[1541]: time="2026-03-13T00:33:02.114278853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\""
Mar 13 00:33:02.119673 containerd[1541]: time="2026-03-13T00:33:02.119630816Z" level=info msg="CreateContainer within sandbox \"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 13 00:33:02.141931 containerd[1541]: time="2026-03-13T00:33:02.139812285Z" level=info msg="Container 8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:33:02.169664 containerd[1541]: time="2026-03-13T00:33:02.169560587Z" level=info msg="CreateContainer within sandbox \"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1\""
Mar 13 00:33:02.170493 containerd[1541]: time="2026-03-13T00:33:02.170450546Z" level=info msg="StartContainer for \"8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1\""
Mar 13 00:33:02.175059 containerd[1541]: time="2026-03-13T00:33:02.175012950Z" level=info msg="connecting to shim 8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1" address="unix:///run/containerd/s/0a73ded21b136c34016304e84628c060e27da020e943a23fff3acb96f22cb8f8" protocol=ttrpc version=3
Mar 13 00:33:02.229402 systemd[1]: Started cri-containerd-8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1.scope - libcontainer container 8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1.
Mar 13 00:33:02.339353 containerd[1541]: time="2026-03-13T00:33:02.337539852Z" level=info msg="StartContainer for \"8436efa223c9dfe7fde6030fe7e6c336226a00e3f56d6f73dba81229cacb25f1\" returns successfully"
Mar 13 00:33:02.924060 kubelet[2877]: I0313 00:33:02.923765 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:33:05.128515 containerd[1541]: time="2026-03-13T00:33:05.128443161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:05.129650 containerd[1541]: time="2026-03-13T00:33:05.129581017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.31.4: active requests=0, bytes read=52406348"
Mar 13 00:33:05.131084 containerd[1541]: time="2026-03-13T00:33:05.130803006Z" level=info msg="ImageCreate event name:\"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:05.133664 containerd[1541]: time="2026-03-13T00:33:05.133596516Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:05.134885 containerd[1541]: time="2026-03-13T00:33:05.134555830Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" with image id \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:99b8bb50141ca55b4b6ddfcf2f2fbde838265508ab2ac96ed08e72cd39800713\", size \"53962361\" in 3.020228174s"
Mar 13 00:33:05.134885 containerd[1541]: time="2026-03-13T00:33:05.134598997Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.31.4\" returns image reference \"sha256:ff033cc89dab51090bfa1b04e155a5ce1e3b1f324f74b7b2be0dd6f0b6b10e89\""
Mar 13 00:33:05.136708 containerd[1541]: time="2026-03-13T00:33:05.136671996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\""
Mar 13 00:33:05.160569 containerd[1541]: time="2026-03-13T00:33:05.160527635Z" level=info msg="CreateContainer within sandbox \"c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 13 00:33:05.170417 containerd[1541]: time="2026-03-13T00:33:05.170373746Z" level=info msg="Container 3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:33:05.182691 containerd[1541]: time="2026-03-13T00:33:05.182644954Z" level=info msg="CreateContainer within sandbox \"c346cc8a1aa58f30743ae17765ff73ce8bc19333d2a8c0dd6d68b822bc672db9\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691\""
Mar 13 00:33:05.183750 containerd[1541]: time="2026-03-13T00:33:05.183670116Z" level=info msg="StartContainer for \"3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691\""
Mar 13 00:33:05.186072 containerd[1541]: time="2026-03-13T00:33:05.186038121Z" level=info msg="connecting to shim 3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691" address="unix:///run/containerd/s/cb8ec4e97702bec1cc4215f2f3572e15c47a3afd0a25b726ca16351061c9636f" protocol=ttrpc version=3
Mar 13 00:33:05.221376 systemd[1]: Started cri-containerd-3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691.scope - libcontainer container 3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691.
Mar 13 00:33:05.291705 containerd[1541]: time="2026-03-13T00:33:05.291643127Z" level=info msg="StartContainer for \"3bd7a0b86986dd462d75ff49201349d7956240802cb8528b9ba6ae613bdc2691\" returns successfully"
Mar 13 00:33:06.159206 kubelet[2877]: I0313 00:33:06.158558 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-76b45757c7-kfgc2" podStartSLOduration=37.865314944 podStartE2EDuration="46.158534929s" podCreationTimestamp="2026-03-13 00:32:20 +0000 UTC" firstStartedPulling="2026-03-13 00:32:56.842551808 +0000 UTC m=+54.385340389" lastFinishedPulling="2026-03-13 00:33:05.135771809 +0000 UTC m=+62.678560374" observedRunningTime="2026-03-13 00:33:06.080052766 +0000 UTC m=+63.622841362" watchObservedRunningTime="2026-03-13 00:33:06.158534929 +0000 UTC m=+63.701323521"
Mar 13 00:33:06.763748 containerd[1541]: time="2026-03-13T00:33:06.763256198Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:06.765561 containerd[1541]: time="2026-03-13T00:33:06.765428586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4: active requests=0, bytes read=14704317"
Mar 13 00:33:06.767044 containerd[1541]: time="2026-03-13T00:33:06.766868030Z" level=info msg="ImageCreate event name:\"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:06.770861 containerd[1541]: time="2026-03-13T00:33:06.770808651Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 13 00:33:06.771874 containerd[1541]: time="2026-03-13T00:33:06.771654006Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" with image id \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:e41c0d73bcd33ff28ae2f2983cf781a4509d212e102d53883dbbf436ab3cd97d\", size \"16260314\" in 1.634934984s"
Mar 13 00:33:06.771874 containerd[1541]: time="2026-03-13T00:33:06.771790508Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.31.4\" returns image reference \"sha256:d7aeb99114cbb6499e9048f43d3faa5f199d1a05ed44165e5974d0368ac32771\""
Mar 13 00:33:06.787558 containerd[1541]: time="2026-03-13T00:33:06.787519872Z" level=info msg="CreateContainer within sandbox \"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 13 00:33:06.804203 containerd[1541]: time="2026-03-13T00:33:06.801331791Z" level=info msg="Container 8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898: CDI devices from CRI Config.CDIDevices: []"
Mar 13 00:33:06.818844 containerd[1541]: time="2026-03-13T00:33:06.818801991Z" level=info msg="CreateContainer within sandbox \"981053aa84edb7c55487ca3ad8da199d39337732327d3c32e3db4c5c4daa4433\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898\""
Mar 13 00:33:06.820948 containerd[1541]: time="2026-03-13T00:33:06.820823559Z" level=info msg="StartContainer for \"8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898\""
Mar 13 00:33:06.825458 containerd[1541]: time="2026-03-13T00:33:06.825413756Z" level=info msg="connecting to shim 8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898" address="unix:///run/containerd/s/0a73ded21b136c34016304e84628c060e27da020e943a23fff3acb96f22cb8f8" protocol=ttrpc version=3
Mar 13 00:33:06.884509 systemd[1]: Started cri-containerd-8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898.scope - libcontainer container 8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898.
Mar 13 00:33:06.960713 systemd[1]: Started sshd@11-10.128.0.72:22-20.161.92.111:57756.service - OpenSSH per-connection server daemon (20.161.92.111:57756).
Mar 13 00:33:07.013548 containerd[1541]: time="2026-03-13T00:33:07.013506436Z" level=info msg="StartContainer for \"8b4dd202f4c371638ae2e950534882916f7d8d7544e07d59d8d80eff0e055898\" returns successfully"
Mar 13 00:33:07.228079 sshd[5371]: Accepted publickey for core from 20.161.92.111 port 57756 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:07.229992 sshd-session[5371]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:07.237695 systemd-logind[1525]: New session 10 of user core.
Mar 13 00:33:07.248383 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 13 00:33:07.448109 sshd[5386]: Connection closed by 20.161.92.111 port 57756
Mar 13 00:33:07.449481 sshd-session[5371]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:07.455911 systemd[1]: sshd@11-10.128.0.72:22-20.161.92.111:57756.service: Deactivated successfully.
Mar 13 00:33:07.459111 systemd[1]: session-10.scope: Deactivated successfully.
Mar 13 00:33:07.461028 systemd-logind[1525]: Session 10 logged out. Waiting for processes to exit.
Mar 13 00:33:07.463725 systemd-logind[1525]: Removed session 10.
Mar 13 00:33:07.825611 kubelet[2877]: I0313 00:33:07.825556 2877 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 13 00:33:07.825611 kubelet[2877]: I0313 00:33:07.825606 2877 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 13 00:33:12.505925 systemd[1]: Started sshd@12-10.128.0.72:22-20.161.92.111:50908.service - OpenSSH per-connection server daemon (20.161.92.111:50908).
Mar 13 00:33:12.746270 sshd[5403]: Accepted publickey for core from 20.161.92.111 port 50908 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:12.748107 sshd-session[5403]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:12.757291 systemd-logind[1525]: New session 11 of user core.
Mar 13 00:33:12.763360 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 13 00:33:12.943123 sshd[5406]: Connection closed by 20.161.92.111 port 50908
Mar 13 00:33:12.944616 sshd-session[5403]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:12.956731 systemd[1]: sshd@12-10.128.0.72:22-20.161.92.111:50908.service: Deactivated successfully.
Mar 13 00:33:12.962064 systemd[1]: session-11.scope: Deactivated successfully.
Mar 13 00:33:12.965421 systemd-logind[1525]: Session 11 logged out. Waiting for processes to exit.
Mar 13 00:33:12.967578 systemd-logind[1525]: Removed session 11.
Mar 13 00:33:13.041681 kubelet[2877]: I0313 00:33:13.041121 2877 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-8lrsn" podStartSLOduration=43.016646173 podStartE2EDuration="53.040946307s" podCreationTimestamp="2026-03-13 00:32:20 +0000 UTC" firstStartedPulling="2026-03-13 00:32:56.754352148 +0000 UTC m=+54.297140731" lastFinishedPulling="2026-03-13 00:33:06.778652285 +0000 UTC m=+64.321440865" observedRunningTime="2026-03-13 00:33:07.113341321 +0000 UTC m=+64.656129914" watchObservedRunningTime="2026-03-13 00:33:13.040946307 +0000 UTC m=+70.583734900"
Mar 13 00:33:17.991469 systemd[1]: Started sshd@13-10.128.0.72:22-20.161.92.111:50914.service - OpenSSH per-connection server daemon (20.161.92.111:50914).
Mar 13 00:33:18.224668 sshd[5450]: Accepted publickey for core from 20.161.92.111 port 50914 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:18.226242 sshd-session[5450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:18.233282 systemd-logind[1525]: New session 12 of user core.
Mar 13 00:33:18.239384 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 13 00:33:18.422525 sshd[5453]: Connection closed by 20.161.92.111 port 50914
Mar 13 00:33:18.424445 sshd-session[5450]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:18.431118 systemd[1]: sshd@13-10.128.0.72:22-20.161.92.111:50914.service: Deactivated successfully.
Mar 13 00:33:18.434766 systemd[1]: session-12.scope: Deactivated successfully.
Mar 13 00:33:18.436668 systemd-logind[1525]: Session 12 logged out. Waiting for processes to exit.
Mar 13 00:33:18.439052 systemd-logind[1525]: Removed session 12.
Mar 13 00:33:23.468982 systemd[1]: Started sshd@14-10.128.0.72:22-20.161.92.111:43530.service - OpenSSH per-connection server daemon (20.161.92.111:43530).
Mar 13 00:33:23.704347 sshd[5494]: Accepted publickey for core from 20.161.92.111 port 43530 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:23.706394 sshd-session[5494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:23.715868 systemd-logind[1525]: New session 13 of user core.
Mar 13 00:33:23.720481 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 13 00:33:23.897383 sshd[5497]: Connection closed by 20.161.92.111 port 43530
Mar 13 00:33:23.899526 sshd-session[5494]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:23.906669 systemd[1]: sshd@14-10.128.0.72:22-20.161.92.111:43530.service: Deactivated successfully.
Mar 13 00:33:23.910454 systemd[1]: session-13.scope: Deactivated successfully.
Mar 13 00:33:23.911954 systemd-logind[1525]: Session 13 logged out. Waiting for processes to exit.
Mar 13 00:33:23.914687 systemd-logind[1525]: Removed session 13.
Mar 13 00:33:24.171755 kubelet[2877]: I0313 00:33:24.171371 2877 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 00:33:28.944876 systemd[1]: Started sshd@15-10.128.0.72:22-20.161.92.111:43532.service - OpenSSH per-connection server daemon (20.161.92.111:43532).
Mar 13 00:33:29.171112 sshd[5542]: Accepted publickey for core from 20.161.92.111 port 43532 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:29.173138 sshd-session[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:29.180263 systemd-logind[1525]: New session 14 of user core.
Mar 13 00:33:29.188478 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 13 00:33:29.357589 sshd[5545]: Connection closed by 20.161.92.111 port 43532
Mar 13 00:33:29.358838 sshd-session[5542]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:29.364326 systemd[1]: sshd@15-10.128.0.72:22-20.161.92.111:43532.service: Deactivated successfully.
Mar 13 00:33:29.367541 systemd[1]: session-14.scope: Deactivated successfully.
Mar 13 00:33:29.371843 systemd-logind[1525]: Session 14 logged out. Waiting for processes to exit.
Mar 13 00:33:29.375709 systemd-logind[1525]: Removed session 14.
Mar 13 00:33:29.409679 systemd[1]: Started sshd@16-10.128.0.72:22-20.161.92.111:43548.service - OpenSSH per-connection server daemon (20.161.92.111:43548).
Mar 13 00:33:29.637863 sshd[5558]: Accepted publickey for core from 20.161.92.111 port 43548 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:29.640631 sshd-session[5558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:29.648167 systemd-logind[1525]: New session 15 of user core.
Mar 13 00:33:29.662468 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 13 00:33:29.891327 sshd[5561]: Connection closed by 20.161.92.111 port 43548
Mar 13 00:33:29.892607 sshd-session[5558]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:29.903895 systemd[1]: sshd@16-10.128.0.72:22-20.161.92.111:43548.service: Deactivated successfully.
Mar 13 00:33:29.907645 systemd[1]: session-15.scope: Deactivated successfully.
Mar 13 00:33:29.908968 systemd-logind[1525]: Session 15 logged out. Waiting for processes to exit.
Mar 13 00:33:29.912221 systemd-logind[1525]: Removed session 15.
Mar 13 00:33:29.938700 systemd[1]: Started sshd@17-10.128.0.72:22-20.161.92.111:43564.service - OpenSSH per-connection server daemon (20.161.92.111:43564).
Mar 13 00:33:30.164707 sshd[5571]: Accepted publickey for core from 20.161.92.111 port 43564 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:30.166824 sshd-session[5571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:30.173257 systemd-logind[1525]: New session 16 of user core.
Mar 13 00:33:30.182413 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 13 00:33:30.364347 sshd[5574]: Connection closed by 20.161.92.111 port 43564
Mar 13 00:33:30.364949 sshd-session[5571]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:30.373729 systemd[1]: sshd@17-10.128.0.72:22-20.161.92.111:43564.service: Deactivated successfully.
Mar 13 00:33:30.379786 systemd[1]: session-16.scope: Deactivated successfully.
Mar 13 00:33:30.381622 systemd-logind[1525]: Session 16 logged out. Waiting for processes to exit.
Mar 13 00:33:30.386011 systemd-logind[1525]: Removed session 16.
Mar 13 00:33:35.416431 systemd[1]: Started sshd@18-10.128.0.72:22-20.161.92.111:38368.service - OpenSSH per-connection server daemon (20.161.92.111:38368).
Mar 13 00:33:35.658727 sshd[5610]: Accepted publickey for core from 20.161.92.111 port 38368 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:35.661151 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:35.669713 systemd-logind[1525]: New session 17 of user core.
Mar 13 00:33:35.675695 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 13 00:33:35.866730 sshd[5613]: Connection closed by 20.161.92.111 port 38368
Mar 13 00:33:35.868460 sshd-session[5610]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:35.875220 systemd[1]: sshd@18-10.128.0.72:22-20.161.92.111:38368.service: Deactivated successfully.
Mar 13 00:33:35.878526 systemd[1]: session-17.scope: Deactivated successfully.
Mar 13 00:33:35.880876 systemd-logind[1525]: Session 17 logged out. Waiting for processes to exit.
Mar 13 00:33:35.883920 systemd-logind[1525]: Removed session 17.
Mar 13 00:33:35.919153 systemd[1]: Started sshd@19-10.128.0.72:22-20.161.92.111:38382.service - OpenSSH per-connection server daemon (20.161.92.111:38382).
Mar 13 00:33:36.160640 sshd[5625]: Accepted publickey for core from 20.161.92.111 port 38382 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:36.162496 sshd-session[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:36.168915 systemd-logind[1525]: New session 18 of user core.
Mar 13 00:33:36.178519 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 13 00:33:36.429352 sshd[5649]: Connection closed by 20.161.92.111 port 38382
Mar 13 00:33:36.430452 sshd-session[5625]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:36.436746 systemd[1]: sshd@19-10.128.0.72:22-20.161.92.111:38382.service: Deactivated successfully.
Mar 13 00:33:36.440194 systemd[1]: session-18.scope: Deactivated successfully.
Mar 13 00:33:36.441878 systemd-logind[1525]: Session 18 logged out. Waiting for processes to exit.
Mar 13 00:33:36.444159 systemd-logind[1525]: Removed session 18.
Mar 13 00:33:36.487016 systemd[1]: Started sshd@20-10.128.0.72:22-20.161.92.111:38398.service - OpenSSH per-connection server daemon (20.161.92.111:38398).
Mar 13 00:33:36.731427 sshd[5658]: Accepted publickey for core from 20.161.92.111 port 38398 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:36.735342 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:36.745475 systemd-logind[1525]: New session 19 of user core.
Mar 13 00:33:36.751406 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 13 00:33:37.611284 sshd[5667]: Connection closed by 20.161.92.111 port 38398
Mar 13 00:33:37.612279 sshd-session[5658]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:37.624281 systemd[1]: sshd@20-10.128.0.72:22-20.161.92.111:38398.service: Deactivated successfully.
Mar 13 00:33:37.625432 systemd-logind[1525]: Session 19 logged out. Waiting for processes to exit.
Mar 13 00:33:37.631831 systemd[1]: session-19.scope: Deactivated successfully.
Mar 13 00:33:37.639840 systemd-logind[1525]: Removed session 19.
Mar 13 00:33:37.669336 systemd[1]: Started sshd@21-10.128.0.72:22-20.161.92.111:38410.service - OpenSSH per-connection server daemon (20.161.92.111:38410).
Mar 13 00:33:37.919812 sshd[5691]: Accepted publickey for core from 20.161.92.111 port 38410 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:37.921421 sshd-session[5691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:37.928063 systemd-logind[1525]: New session 20 of user core.
Mar 13 00:33:37.935397 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 13 00:33:38.312112 sshd[5696]: Connection closed by 20.161.92.111 port 38410
Mar 13 00:33:38.313498 sshd-session[5691]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:38.318434 systemd[1]: sshd@21-10.128.0.72:22-20.161.92.111:38410.service: Deactivated successfully.
Mar 13 00:33:38.322159 systemd[1]: session-20.scope: Deactivated successfully.
Mar 13 00:33:38.325847 systemd-logind[1525]: Session 20 logged out. Waiting for processes to exit.
Mar 13 00:33:38.327685 systemd-logind[1525]: Removed session 20.
Mar 13 00:33:38.360560 systemd[1]: Started sshd@22-10.128.0.72:22-20.161.92.111:38422.service - OpenSSH per-connection server daemon (20.161.92.111:38422).
Mar 13 00:33:38.600562 sshd[5707]: Accepted publickey for core from 20.161.92.111 port 38422 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:38.602533 sshd-session[5707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:38.610059 systemd-logind[1525]: New session 21 of user core.
Mar 13 00:33:38.614473 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 13 00:33:38.795274 sshd[5710]: Connection closed by 20.161.92.111 port 38422
Mar 13 00:33:38.797397 sshd-session[5707]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:38.805434 systemd[1]: sshd@22-10.128.0.72:22-20.161.92.111:38422.service: Deactivated successfully.
Mar 13 00:33:38.810639 systemd[1]: session-21.scope: Deactivated successfully.
Mar 13 00:33:38.812016 systemd-logind[1525]: Session 21 logged out. Waiting for processes to exit.
Mar 13 00:33:38.814598 systemd-logind[1525]: Removed session 21.
Mar 13 00:33:43.837423 systemd[1]: Started sshd@23-10.128.0.72:22-20.161.92.111:58476.service - OpenSSH per-connection server daemon (20.161.92.111:58476).
Mar 13 00:33:44.066135 sshd[5751]: Accepted publickey for core from 20.161.92.111 port 58476 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:44.066907 sshd-session[5751]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:44.072866 systemd-logind[1525]: New session 22 of user core.
Mar 13 00:33:44.080410 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 13 00:33:44.242938 sshd[5754]: Connection closed by 20.161.92.111 port 58476
Mar 13 00:33:44.244502 sshd-session[5751]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:44.250336 systemd[1]: sshd@23-10.128.0.72:22-20.161.92.111:58476.service: Deactivated successfully.
Mar 13 00:33:44.254098 systemd[1]: session-22.scope: Deactivated successfully.
Mar 13 00:33:44.255699 systemd-logind[1525]: Session 22 logged out. Waiting for processes to exit.
Mar 13 00:33:44.258334 systemd-logind[1525]: Removed session 22.
Mar 13 00:33:49.291130 systemd[1]: Started sshd@24-10.128.0.72:22-20.161.92.111:58484.service - OpenSSH per-connection server daemon (20.161.92.111:58484).
Mar 13 00:33:49.516289 sshd[5768]: Accepted publickey for core from 20.161.92.111 port 58484 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:49.517402 sshd-session[5768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:49.525741 systemd-logind[1525]: New session 23 of user core.
Mar 13 00:33:49.536351 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 13 00:33:49.699200 sshd[5771]: Connection closed by 20.161.92.111 port 58484
Mar 13 00:33:49.700505 sshd-session[5768]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:49.706309 systemd-logind[1525]: Session 23 logged out. Waiting for processes to exit.
Mar 13 00:33:49.706971 systemd[1]: sshd@24-10.128.0.72:22-20.161.92.111:58484.service: Deactivated successfully.
Mar 13 00:33:49.710799 systemd[1]: session-23.scope: Deactivated successfully.
Mar 13 00:33:49.712996 systemd-logind[1525]: Removed session 23.
Mar 13 00:33:54.747012 systemd[1]: Started sshd@25-10.128.0.72:22-20.161.92.111:48182.service - OpenSSH per-connection server daemon (20.161.92.111:48182).
Mar 13 00:33:54.965403 sshd[5783]: Accepted publickey for core from 20.161.92.111 port 48182 ssh2: RSA SHA256:uQjByQy7SUWwJv8O1efEqHmmzGn6ZMrMlwxdrDbTo0o
Mar 13 00:33:54.967043 sshd-session[5783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 13 00:33:54.973955 systemd-logind[1525]: New session 24 of user core.
Mar 13 00:33:54.980388 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 13 00:33:55.147673 sshd[5786]: Connection closed by 20.161.92.111 port 48182
Mar 13 00:33:55.148674 sshd-session[5783]: pam_unix(sshd:session): session closed for user core
Mar 13 00:33:55.155365 systemd[1]: sshd@25-10.128.0.72:22-20.161.92.111:48182.service: Deactivated successfully.
Mar 13 00:33:55.158713 systemd[1]: session-24.scope: Deactivated successfully.
Mar 13 00:33:55.160263 systemd-logind[1525]: Session 24 logged out. Waiting for processes to exit.
Mar 13 00:33:55.163025 systemd-logind[1525]: Removed session 24.